Knewton Adaptive Learning Technology

Problem Outline

Knewton was founded in 2008 by Jose Ferreira, a former executive at Kaplan, Inc., to help schools, publishers, and developers provide adaptive learning for every student. Knewton believes that no two students are alike in their backgrounds or learning styles, and that education must be tailored to each child's strengths and weaknesses. Knewton draws on each student's history, on the interests of students with similar learning styles, and on decades of research into improving learning experiences to recommend the next best course or activity that maximizes the student's learning. By doing so, Knewton has helped Arizona State University (among others) increase pass rates by 17%, reduce course withdrawal rates by 56%, and accelerate learning, with 45% of students finishing a course four weeks early.

Solution

Knewton utilizes adaptive learning technology to create a platform that allows educational institutions and software publishers to tailor educational content for personal use. Started as an online test-prep software company, Knewton now aims to identify the next best step in the user's learning journey. By partnering with leading US universities and publishers like Pearson, the adaptive learning platform aims to end the one-size-fits-all curriculum, making personalized curricula accessible across K-12 and college education. Knewton's solution offers a two-pronged approach to curriculum recommendation, guiding students on what to learn next and how to learn it. The recommendations can drive the complete learning experience or serve as tailored remediation in response to test performance.

This is achieved through data. Once a student logs in to the platform, every keypress and mouse movement is recorded as part of a clickstream used to understand the student's behavior.
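To make the idea concrete, a single clickstream event might look something like the sketch below. The field names are our own invention for illustration, not Knewton's actual schema.

```python
from dataclasses import dataclass, field
from time import time

@dataclass
class ClickstreamEvent:
    """One recorded student interaction (hypothetical schema)."""
    student_id: str
    module_id: str
    event_type: str   # e.g. "keypress", "mouse_move", "answer_submit"
    timestamp: float = field(default_factory=time)
    payload: dict = field(default_factory=dict)

# Example: a student submits an incorrect answer to a quiz item.
event = ClickstreamEvent(
    student_id="s-1042",
    module_id="algebra-linear-eq-01",
    event_type="answer_submit",
    payload={"item_id": "q7", "correct": False, "time_on_item_s": 42.5},
)
```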

The adaptive learning algorithm then uses this data to understand different dimensions of the learning experience, such as engagement, proficiency, boredom, and frustration, measured through time spent on learning modules, error rates, assessments taken, and so on. For instance, Knewton uses item response theory to assess proficiency based on an individual's responses to quizzes, compared against the overall test-taker population.
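Item response theory (IRT) models the probability that a student answers an item correctly as a function of the student's latent proficiency and the item's characteristics. Below is a minimal sketch of the standard two-parameter logistic (2PL) model; the parameter values are illustrative, and this is the generic textbook formulation rather than Knewton's proprietary implementation.

```python
import math

def p_correct(theta: float, a: float, b: float) -> float:
    """2PL IRT: probability that a student with proficiency `theta`
    answers an item with discrimination `a` and difficulty `b` correctly."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# Illustrative values: a student slightly above average (theta = 0.5)
# facing a moderately hard (b = 1.0), fairly discriminating (a = 1.2) item.
print(p_correct(theta=0.5, a=1.2, b=1.0))  # ~0.354
```

Fitting `theta` across many such items lets the platform compare one student's proficiency against the broader test-taker population.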

Evaluate Effectiveness and Commercial Promise

Knewton has chosen to position itself as an adaptive learning platform that partners with educational content providers to create personalized learning experiences. Its partners include Houghton Mifflin, Pearson, and Triumph Learning, which has given it considerable weight in the US market. In addition, Knewton has served 13 million students worldwide through its platform, having also targeted developing markets where there are fewer entrenched educational structures to overcome. Finally, Knewton has been building partnerships with MOOCs as well as universities. Results reported by Knewton on its partnership with Arizona State University in developmental math courses show that pass rates increased by 11 percentage points while withdrawal rates decreased by 50%.

Knewton's competitors in the adaptive learning space include Kidaptive, McGraw-Hill Education, Smart Sparrow (an Australia-based company), DreamBox Learning, and Desire2Learn, among others. While each competitor has its own set of results and wins, it is notable that Smart Sparrow has reported reducing failure rates from 31% to 7% in a mechanics course, and it is also working with Arizona State University. So while Knewton has seen promising results from its platform and has a lot of traction, competitors are achieving similar if not better results. One pseudo-competitor that Knewton could consider partnering with would be alternative schools, such as AltSchool, as charter schools and alternative methods of education become increasingly popular. This would give Knewton another avenue to leverage its platform while also giving it an edge over current competitors.

Proposed Alterations

  • Where students sit in a classroom
    • Knewton already uses engagement modeling to determine how engaged online students are. The same methodology could be extended to the physical classroom.
    • Using photo sensors, Knewton could incorporate classroom seating location into its analytics. Perhaps it could be determined whether a student's learning is affected by where they sit in a classroom relative to the teacher and other students.
  • Integration with standardized testing
    • The Knewton adaptive ontology could be used to better understand student preparedness for standardized testing, and the effectiveness of standardized testing itself. The assessment and prerequisite relationships are particularly useful here, since they provide a view of how well students understand concepts that build on previous concepts.
    • The Knewton tool could help standardized-test developers verify that the concepts intended to be tested are indeed those being tested. It could also help students prepare for the test.
  • Integration with student loan underwriters
    • Results at Arizona State University indicate significant improvements in withdrawal rates. Non-completion of degree programs is the leading cause of student loan defaults, so Knewton insights could serve as an indicator of student loan default risk (a rough sketch of this idea follows this list).
    • Data privacy may be an issue at the individual level.
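As a rough illustration of how such engagement and proficiency signals might feed a default-risk indicator, here is a minimal sketch using a generic logistic regression. The feature names, data, and labels are entirely hypothetical; nothing here reflects Knewton's actual outputs or any real underwriting model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical per-student features derived from adaptive-learning data:
# [avg weekly engagement hours, proficiency percentile, modules withdrawn]
X = np.array([
    [6.0, 80, 0],
    [1.5, 35, 3],
    [4.0, 60, 1],
    [0.5, 20, 4],
    [5.5, 75, 0],
    [2.0, 40, 2],
])
# 1 = later defaulted on a student loan (synthetic labels for illustration)
y = np.array([0, 1, 0, 1, 0, 1])

model = LogisticRegression().fit(X, y)

# Estimated default risk for a new (hypothetical) student profile.
print(model.predict_proba([[3.0, 50, 1]])[0, 1])
```

Any real use of such a model would, of course, have to address the privacy concern noted above.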

Team: Cyborbs

Members: Alisha Marfatia, Paul Meier, Sakshi Jain, Scott Fullman, Shreeranjani Krishnamoorthy

Tesla Autopilot Technology

Opportunity & Solution Summary

Beyond the enormous societal benefit of reducing traffic collisions, a connected fleet of autonomous vehicles allows for more predictable, efficient traffic flow; improved mobility and productivity among travelers; and, eventually, a business model shift from outright vehicle ownership to 'transportation-as-a-service'.

Looking ahead, the National Highway Traffic Safety Administration (NHTSA) created a five-level classification system of autonomous capabilities, ranging from Level 0 (no automation) to Level 4 (full self-driving automation), to measure progress and innovation.

In October 2015, Tesla Motors pushed software version 7.0 to its Model S customers, which included Tesla Autopilot, the most advanced publicly available autonomous driving software.

While many companies have developed autonomous capabilities (particularly Google, which, as the first mover, logged 1 million fully autonomous miles before Tesla launched Autopilot), Tesla's software has uniquely iterated on and addressed the changing needs of the user to become the superior solution. Interestingly, more than 20 automakers hold more autonomous-driving patents than Tesla (mostly surrounding anti-collision and braking control mechanisms), but Tesla has been the first automaker to provide substantial Level 3 features in the marketplace.

This has enabled Tesla to leverage its thousands of drivers to quickly improve its algorithms via ensemble training. By pushing these solutions to the market, Tesla has logged 50 times more autonomous miles (supplemented by user feedback) than Google to boost algorithm performance. In the short run, this means improving vehicle efficiency and customer safety. In the longer run, it means reaching full self-driving automation (Level 4). The software's continuous learning technology enables the autonomous cars to update as new driving behaviors are observed from users.

NVIDIA and Tesla together have fed millions of miles' worth of driving data and video to train the computer about driving. Tesla leverages NVIDIA's DRIVE PX 2 platform to run an internally developed neural net for vision, sonar, and radar processing. DRIVE PX 2 works in combination with version 6.0 of NVIDIA's deep-learning CUDA® Deep Neural Network library (cuDNN) and NVIDIA's Tesla P100 GPU to detect and classify objects 10x faster than the previous processor, dramatically increasing the accuracy of Autopilot's decision-making.
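Tesla's actual network is proprietary, so the following PyTorch snippet is only a schematic sketch of the general idea: per-sensor feature branches (vision, radar, sonar) fused into a single object classification. All layer sizes and the three-branch design are our illustrative assumptions, not Tesla's architecture.

```python
import torch
import torch.nn as nn

class SensorFusionNet(nn.Module):
    """Schematic only: fuse per-sensor features and classify objects."""
    def __init__(self, n_classes: int = 10):
        super().__init__()
        # Hypothetical per-sensor feature encoders.
        self.vision = nn.Sequential(nn.Linear(512, 128), nn.ReLU())
        self.radar = nn.Sequential(nn.Linear(64, 32), nn.ReLU())
        self.sonar = nn.Sequential(nn.Linear(32, 16), nn.ReLU())
        # Classification head over the concatenated (fused) features.
        self.head = nn.Linear(128 + 32 + 16, n_classes)

    def forward(self, vision_feats, radar_feats, sonar_feats):
        fused = torch.cat([
            self.vision(vision_feats),
            self.radar(radar_feats),
            self.sonar(sonar_feats),
        ], dim=-1)
        return self.head(fused)  # raw class logits

net = SensorFusionNet()
logits = net(torch.randn(1, 512), torch.randn(1, 64), torch.randn(1, 32))
print(logits.shape)  # torch.Size([1, 10])
```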

Effectiveness, Commercial Promise, and Competition

While Google's technology is more precise (its LIDAR system builds a 360-degree model that tracks obstacles better than Tesla's and can localize itself within 10 centimeters), Tesla's is publicly available at a reasonable price. Tesla's most recent hardware set includes forward-facing radar, as well as eight cameras and twelve ultrasonic sensors around the vehicle. The company continues to roll out new features in regular over-the-air updates.

To date, Tesla's continuous push of new and updated Autopilot features has been largely successful in improving consumer safety. Following a 2016 investigation into a fatal crash involving a Tesla Model S (which was closed without a finding of defect), the U.S. Department of Transportation found that Tesla's Autosteer feature had already improved the company's safety record, reducing crashes by 40%, from 1.3 to 0.8 crashes per million miles.

Tesla's software algorithms are a short-run competitive advantage over other automakers; its technology is in the hands of more users, quickly improving its solution. However, as fully autonomous driving becomes commoditized over the next 10-30 years, the automotive business model will shift from vehicle ownership to transportation-as-a-service, and the competitive advantage will shift toward mass-market fleet vehicle manufacturers (e.g., Toyota, Ford, GM). If vehicles aren't owned by the end user but instead summoned or rented, the need for a superior driving experience drastically decreases in favor of the cheapest fare. Accordingly, GM invested $500M in Lyft last year to begin building an integrated on-demand network of autonomous vehicles.

Improvement and Alterations

Tesla has made progress since its first software push, but according to Elon Musk, the company is multiple years away from pushing out Level 4 capabilities. Moving forward, Tesla's biggest obstacles (beyond regulation) are better local road mapping; removing the need for user input; and stronger recognition of stop signs, traffic lights, and road updates. In most geographies, many Autopilot features are geoblocked, restricting use primarily to highways and other major roads. By training its software to better recognize stop-sign images, as well as traffic light locations and color changes, Tesla could make Autopilot usable in more local situations. In addition, Tesla's publicly available vehicles are not yet truly autonomous, even on highways: vehicles have hands-on warnings that require the driver to stay engaged throughout the ride, as well as a feature that shuts off Autopilot for the remainder of the drive cycle if the driver fails to respond to alerts ("Autopilot strikeout").
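One generic way to attack the stop-sign and traffic-light recognition problem is transfer learning: adapting an image classifier pretrained on general images to traffic-control classes. The sketch below is purely illustrative; the three classes, the frozen backbone, and the dummy batch are our assumptions, not Tesla's actual training pipeline.

```python
import torch
import torch.nn as nn
from torchvision import models

# Start from a backbone pretrained on ImageNet and freeze it.
model = models.resnet18(pretrained=True)
for param in model.parameters():
    param.requires_grad = False

# Replace the final layer with hypothetical traffic-control classes,
# e.g. {stop sign, red light, green light}.
model.fc = nn.Linear(model.fc.in_features, 3)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch of road images.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 3, (8,))
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
```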

Tesla’s Autopilot In Action

Blog post by Ex Machina Learners

Sources

Ackerman Evan. “GM Starts Catching Up in Self-Driving Car Tech with $1 Billion Acquisition of Cruise Automation.” IEEE Spectrum: Technology, Engineering, and Science News. N.p., 14 Mar. 2016. Web. 07 Apr. 2017.

“Autopilot.” Tesla, Inc. Apr. 2017.

Fehrenbacher, Katie. “How Tesla’s Autopilot Learns.” How Tesla’s Autopilot Learns. Fortune, 19 Oct. 2015. Web. 07 Apr. 2017.

Habib, Kareem. “Automatic Vehicle Control Systems.” U.S. Department of Transportation NHTSA Announcement. Jan. 2017.

“NVIDIA CuDNN.” NVIDIA Developer. N.p., 30 Mar. 2017. Web. 07 Apr. 2017.

Pressman, Matt. “Inside NVIDIA’s New Self-Driving Supercomputer Powering Tesla’s Autopilot.” CleanTechnica. N.p., 25 Oct. 2016. Web. 07 Apr. 2017.

Randall, Tom. “Tesla’s Autopilot Vindicated With 40% Drop in Crashes.” Bloomberg.com. Bloomberg, 19 Jan. 2017. Web. 04 Apr. 2017.

Vijh, Rahul. “Autonomous Cars – Patents and Perspectives.” IPWatchdog.com | Patents & Patent Law. N.p., 06 Apr. 2016. Web. 07 Apr. 2017.

Uptake!

Uptake, a Chicago-based data analytics firm, was founded in 2014 by Brad Keywell and Eric Lefkofsky to develop locomotive-related predictive diagnostics. Its predictive-analytics Software-as-a-Service platform aims to help enterprises improve productivity, reliability, and safety through a suite of solutions including predictive diagnostics and fleet-management applications.

Every time a piece of equipment goes idle due to equipment failure or poor planning, there are two costs: (a) the cost of the repair in parts, labor, and so on, and (b) the opportunity cost of lost revenue. There are also substantial costs involved in keeping contractors nearby while waiting for machines to return to service. Downtime, scheduled or unscheduled, is essentially time during which the site and the equipment are not earning back their investment costs.

The Uptake platform uses machine learning combined with knowledge from industrial partners to deliver industry-specific platforms and applications that solve complex, relevant industrial problems, such as predicting equipment failure, which can yield enormous savings. It combines data science with the massive data generated by the plethora of sensors in these machines to find the signals and patterns that power predictive diagnostics. In addition to shifting from a reactive 'repair after failure' mode to a proactive 'repair before failure' stance, Uptake also helps customers track fuel efficiency, idle time, location, and other machine data.
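As a rough illustration of the 'repair before failure' idea, a failure-prediction model can be trained on historical sensor snapshots labeled with subsequent failures. Everything below (features, data, and the choice of a generic gradient-boosted classifier) is hypothetical and is not Uptake's actual model.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# Hypothetical sensor snapshots per machine:
# [engine temp (C), vibration (mm/s), oil pressure (kPa), hours since service]
X = np.array([
    [82, 2.1, 310, 120],
    [95, 4.8, 250, 900],
    [78, 1.9, 320, 60],
    [99, 5.5, 230, 1100],
    [85, 2.5, 300, 200],
    [97, 5.1, 240, 1000],
])
# 1 = machine failed within the following week (synthetic labels)
y = np.array([0, 1, 0, 1, 0, 1])

model = GradientBoostingClassifier().fit(X, y)

# Flag machines whose predicted failure risk warrants proactive repair.
risk = model.predict_proba([[93, 4.2, 260, 800]])[0, 1]
print(f"failure risk: {risk:.2f}")
```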

Uptake has a very strong value proposition and commercial relevance. The company claims that its solution covers industry segments including rail, mining, agriculture, construction, energy, aerospace, and retail. Its marquee client is Caterpillar, which has also invested in the firm. Instead of building its own integrated services, Caterpillar shared its equipment know-how and works with Uptake, which employs more than 300 engineers, data scientists, and designers. Uptake also recently announced its foray into the wind energy space by adding two subsidiaries of Berkshire Hathaway Energy to its client roster: MidAmerican Energy Company and BHE Renewables. Uptake's current annual revenue run-rate exceeds $100 million, and because of its unique algorithms and industry focus it is valued at $2 billion.

While Uptake generates immense value in construction-equipment predictive diagnostics, it could further improve its predictions by also incorporating environmental conditions such as soil structure, site geometry, operating weather conditions, and airborne precipitation. Through the use of sensors, these factors can be measured even before the equipment is put to use, and can thus help better estimate the wear-and-tear costs and time delays associated with a given project. Using this "perceptive" data collected through sensors, equipment firms could also manage their replacement-part inventory, further reducing operational costs.
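Under the simplest assumption, these environmental readings could just be appended as extra features to the same kind of model sketched above; the site-condition values here are, again, hypothetical.

```python
import numpy as np

# Machine sensor features as before, plus hypothetical site conditions:
# [soil hardness index, ambient temp (C), precipitation (mm/day)]
machine_feats = np.array([[93, 4.2, 260, 800]])
env_feats = np.array([[7.5, 31.0, 12.0]])

# Same model API, richer feature set.
X_augmented = np.hstack([machine_feats, env_feats])
print(X_augmented.shape)  # (1, 7)
```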

Uptake's Products

Source: Company website

Presence across industries

Source: Company website

Sources:

https://uptake.com/products#2

http://chicagoinno.streetwise.co/2015/03/05/caterpillar-invests-in-uptake-the-groupon-and-brad-keywell-led-data-company/

http://siliconangle.com/blog/2017/02/01/predictive-analytics-startup-uptake-raises-40m-new-round/

http://bigdatanewsmagazine.com/2017/03/03/uptake-is-bringing-predictive-analytics-to-2-wind-energy-companies-chicago-inno-2/

https://www.forbes.com/sites/briansolomon/2015/12/17/how-uptake-beat-slack-uber-to-become-2015s-hottest-startup/#7cd7f1dc6cd0

http://autodeskfusionconnect.com/machine-2-machine-how-smart-apps-monitor-construction-site-and-equipment-for-better-project-margins/

http://www.bauerpileco.com/export/sites/www.bauerpileco.com/documents/brochures/bauer_bg_brochures/CSM.pdf

Anecdotal Evidence: Profile – Array of Things

Array of Things

A new urban initiative called Array of Things is attempting to be a “fitness tracker for a city” by installing sensors throughout the City of Chicago.

Array of Things Sensor

Problem: Local Pollution

The WHO estimates that urban air pollution, most of which is generated by vehicles, industry, and energy production, kills 1.2 million people annually. While most of these deaths occur in developing countries, Chicago still faces significant issues: in 2016, Cook County was given an "F" for air quality by the American Lung Association. There are many pieces of this problem that Chicago is attempting to tackle, but one important aspect is understanding how air pollution affects citizens' day-to-day lives and how different levels of pollution affect different regions of the city. The goal of increasing understanding is to aid the city in developing additional programs to curb air pollution and to engage with the public to find solutions.

Map of Potential City Installations

Augmented Perception Solution

Array of Things is an effort (sponsored in part by the City of Chicago) to install hundreds of inexpensive, replaceable sensor devices across the city to track all sorts of pollution indices. These sensors use carbon monoxide detectors and pollen counters to measure air pollution, and cameras and microphones to measure congestion and noise pollution. The measured data will be both relayed to relevant departments in the City of Chicago and posted online for the public. The hope is that this data will help city planners better optimize planning decisions (e.g., traffic flow around a school, or where to install a bike path) and potentially allow the public and academics to better understand the role hyper-local pollution plays in citizen health and well-being. Beyond air pollution, Array of Things is also striving to be a platform for monitoring a host of other city data. While the ultimate applications are unknown, its creators see the potential to leverage this sensor equipment to transform the way city planning decisions are made, not just from a health perspective.
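Since the real data format had not yet been released when this was written, here is only a guess at how the published readings might be consumed: a minimal sketch that aggregates hypothetical node readings into per-neighborhood averages. Node IDs, neighborhoods, and values are all invented.

```python
import statistics
from collections import defaultdict

# Hypothetical readings: (node_id, neighborhood, carbon monoxide ppm)
readings = [
    ("node-01", "Loop", 1.8),
    ("node-02", "Loop", 2.4),
    ("node-03", "Pilsen", 3.1),
    ("node-04", "Pilsen", 2.9),
]

by_area = defaultdict(list)
for node_id, area, co_ppm in readings:
    by_area[area].append(co_ppm)

for area, values in sorted(by_area.items()):
    print(f"{area}: mean CO {statistics.mean(values):.2f} ppm")
```

Per-area aggregates like these are exactly what the concrete pollution ratings proposed below could be built on.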

First Installations

Array of Things Results

Results have been limited. The first machines were installed in late 2016 and data has yet to be made publicly available. That said, other cities are excited about this idea – with Seattle as a likely second city for installation and Bristol and Newcastle as the first international destinations.

Proposed Modifications

We would propose two major changes to this project. First, we would strive to solidify some of the goals, and particularly the involvement of the city. While the City of Chicago has paid lip service to the project, there are no concrete changes that the city has agreed to make based on the results. Getting buy-in for concrete changes (e.g., committing to help clean up the more polluted but populated areas of the city) before seeing results would increase the chance that changes to improve citizen health are actually made. Along those lines, creating concrete ratings to grade different areas in terms of pollution, and broadcasting those ratings, would help both incentivize local changes and increase awareness of high-pollution areas.

Second, we would advocate limiting the scope of Array of Things' goals, at least in terms of its marketing and pitch. In most of its marketing, the project describes the ability of its system to do everything from notifying individuals of ice patches to finding the most populated route for a late-night walk. While these are potential applications of the sensors (and we do not advocate removing any of them), tailoring the vision to more concrete and limited goals will make the project more likely to succeed in the near term. By trying to do everything at once, the effort risks overstating its value and missing out on the most impactful results, particularly those around pollution.

Sources:

https://www.nsf.gov/news/special_reports/science_nation/arrayofthings.jsp

https://news.uchicago.edu/article/2016/08/29/chicago-becomes-first-city-launch-array-things

http://www.bbc.com/news/technology-39229221

http://www.computerworld.com/article/3115224/internet-of-things/chicago-deploys-computers-with-eyes-ears-and-noses.html

https://gcn.com/articles/2017/03/07/sensor-net-resilience.aspx

http://www.who.int/heli/risks/urban/urbanenv/en/

http://www.lung.org/our-initiatives/healthy-air/sota/city-rankings/states/illinois/

http://www.govtech.com/fs/Array-of-Things-Expands-to-Cities-with-Research-Partnerships.html