Cyborgs Autocare

The automotive aftermarket refers to the secondary market of the auto industry and covers the retailing and distribution of parts, accessories, and chemicals after the sale of the automobile. The US automotive aftermarket is estimated to be worth $318.2B and continues to grow at 16% annually (as of 2015). The industry is moving toward consolidation as international players and tech companies try to get their technology into cars.

As tech companies vie to lay claim to the driver’s seat, our company is disrupting the maintenance and repair aftermarket for these connected vehicles (estimated to number 250 million by 2020). Cyborgs Autocare aims to use predictive maintenance to create a better network of drivers, vehicle owners, auto mechanics, and original equipment manufacturers. Cyborgs Autocare will monitor automotive performance indicators to predict when and why a failure is likely to occur and the potential impact of that failure. For drivers, this enables pre-emptive planning of vehicle downtime and automatic maintenance scheduling. Service centers will be able to optimize spare parts inventory, pre-empt service schedules, and improve overall customer service. For original equipment manufacturers, this will help better combat the duplicate parts market and minimize warranty claims.

As cars continue to become more software dependent, we propose leveraging the stream of performance data these cars can transmit to develop a predictive maintenance solution that warns drivers of failing equipment in advance and notifies dealerships and repair shops of drivers who need to come in and the equipment required to service their cars. Using sensors embedded in the car’s systems, our product would collect real-time data on the engine, exhaust, braking system, airbags, and transmission, among other components. This data would feed into our algorithm, trained on historical data, to predict the likelihood that maintenance is needed. If the likelihood is above a certain threshold, the driver receives a message on the dashboard indicating the need and reason to take the car in, along with the ability to call or schedule an appointment with nearby repair shops or the dealership the car was purchased from.
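To make the alerting logic concrete, here is a minimal Python sketch of the threshold step described above. The sensor fields, scoring rule, and 0.7 cutoff are placeholders for illustration, not our production model, which would be trained on historical failure data.

```python
# Minimal sketch of the dashboard-alert step described above.
# The sensor fields, scoring rule, and ALERT_THRESHOLD are placeholders;
# a real system would call a model trained on historical failure data.
from dataclasses import dataclass
from typing import Optional

ALERT_THRESHOLD = 0.7  # hypothetical probability cutoff


@dataclass
class SensorReading:
    engine_temp_c: float
    brake_pad_wear_pct: float
    transmission_vibration: float


def failure_probability(reading: SensorReading) -> float:
    """Stand-in for the trained model; returns a likelihood of near-term failure."""
    score = 0.0
    score += 0.4 if reading.engine_temp_c > 110 else 0.0
    score += 0.4 if reading.brake_pad_wear_pct > 80 else 0.0
    score += 0.2 if reading.transmission_vibration > 5.0 else 0.0
    return score


def maybe_alert(reading: SensorReading) -> Optional[str]:
    """Return a dashboard message when the failure likelihood crosses the threshold."""
    p = failure_probability(reading)
    if p >= ALERT_THRESHOLD:
        return f"Maintenance recommended (risk score {p:.2f}). Schedule service nearby?"
    return None


print(maybe_alert(SensorReading(engine_temp_c=118, brake_pad_wear_pct=85, transmission_vibration=2.0)))
```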

This solution could also be important for autonomous cars, as it would provide a crucial safety mechanism by automatically directing one of these cars to the nearest repair shop if the algorithm detected any abnormalities.

To demonstrate our product, we would partner with a ride-sharing company, and segment a portion of the ride-share fleet into a number of different pools. The pools would have a comparable make-up in terms of vehicle make, model, mileage, and age. Half of the pools would continue with their maintenance methods as-usual (fixed schedule, and reactive). The other half of the pools would use the predictive model to pre-emptively indicate when maintenance should be performed. Drivers would share all records of maintenance procedures and associated costs. At the end of a certain period of time and number of miles traveled, the results would be compared, looking at the number of failures and associated costs.  Variances in the number of miles traveled by the various cars in each pool would be controlled for.  The hypothesis is that the pools utilizing the preventative maintenance modeling system will experience lower failure rates and lower maintenance expenses. If successful, this empirical demonstration would help promote the merits of the solution.
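As a sketch of how the comparison might be run once the records are in, the snippet below normalizes failures and costs by miles traveled in each pool; the column names and figures are hypothetical.

```python
# Hypothetical evaluation of the fleet demonstration: compare failure and cost
# rates per 10,000 miles between the control pools and the predictive pools.
import pandas as pd

# Placeholder records; a real study would use the drivers' maintenance logs.
records = pd.DataFrame({
    "pool":     ["control", "control", "predictive", "predictive"],
    "miles":    [52_000, 48_000, 50_500, 49_000],
    "failures": [6, 5, 3, 2],
    "cost_usd": [4_200, 3_900, 2_600, 2_100],
})

summary = records.groupby("pool").sum(numeric_only=True)
summary["failures_per_10k_miles"] = summary["failures"] / summary["miles"] * 10_000
summary["cost_per_10k_miles"] = summary["cost_usd"] / summary["miles"] * 10_000
print(summary)
```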

For collecting the data, we would consider two strategies: using publicly available historical data and collecting data directly from the vehicles. One example of publicly available information is the NHTSA complaint database (https://www-odi.nhtsa.dot.gov/owners/SearchSafetyIssues), which contains structured information and text complaints on vehicles going back to the 1950’s. We would use natural language processing techniques to extract specific part failures from the text information. In addition, by 2020, more than 250 million cars will be connected to the Internet (http://www.gartner.com/newsroom/id/2970017). Assuming we are able to negotiate a partnership with vehicle manufacturers, we would be able to continuously collect information logged on parts as it is sent from the vehicle, and vastly increase our database.
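As a rough illustration of the extraction step, a keyword-based sketch like the one below could pull standardized part names out of complaint text. The part lexicon and example complaint are made up, and a production pipeline would rely on a trained NLP model rather than a hand-written list.

```python
# Rough sketch of extracting part failures from free-text complaints.
# The part lexicon and the example complaint are illustrative placeholders.
import re

PART_LEXICON = {
    "brake": "braking system",
    "airbag": "airbags",
    "transmission": "transmission",
    "exhaust": "exhaust",
    "engine": "engine",
}


def extract_failed_parts(complaint_text: str) -> list[str]:
    """Return the standardized part names mentioned in a complaint."""
    found = []
    lowered = complaint_text.lower()
    for keyword, part in PART_LEXICON.items():
        if re.search(rf"\b{keyword}\w*\b", lowered):
            found.append(part)
    return found


print(extract_failed_parts("Brakes failed intermittently; engine light came on."))
# -> ['braking system', 'engine']
```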

In terms of creating predictive models for maintenance, we would combine several techniques to develop an overall picture of vehicle health. One method would rely on survival analysis models for individual parts, which estimate the expected duration of time until a part fails. In addition, we would create a Markov chain model that would tell us what part is likely to fail next given existing information and previous failure history. These two models can be combined to create an overall vehicle health index. Finally, we are confident in this approach because these models have been proven to work in other industries such as oil and gas (http://www.ospreydata.com/architecture/).
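The sketch below shows one way the two models could be combined into a single index, assuming illustrative (not fitted) survival probabilities and transition probabilities.

```python
# Toy sketch combining a survival model and a Markov chain into one health index.
# The survival probabilities and transition matrix below are illustrative
# placeholders, not fitted values.
import numpy as np

PARTS = ["brakes", "transmission", "exhaust"]

# P(part survives the next 10,000 miles), e.g. from fitted survival models.
survival_next_interval = np.array([0.92, 0.97, 0.88])

# Markov transition matrix: row = last part that failed, column = next failure.
next_failure_given_last = np.array([
    [0.50, 0.30, 0.20],   # last failure was brakes
    [0.25, 0.45, 0.30],   # last failure was transmission
    [0.20, 0.30, 0.50],   # last failure was exhaust
])


def health_index(last_failed_part: str) -> float:
    """Weight each part's survival probability by how likely it is to fail next."""
    weights = next_failure_given_last[PARTS.index(last_failed_part)]
    return float(np.dot(weights, survival_next_interval))


print(f"Vehicle health index: {health_index('brakes'):.3f}")
```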

SMaRT Pantry

The Problem: Americans are Cooking Less

In 2015, Americans spent more on eating out than they did on groceries[i]. When asked why they are cooking less, people’s answers range from not having the right ingredients, to not knowing how to cook, to not having time to find something quick and easy. Our new technology, SMaRT Pantry, aims to solve all of those issues.

 

The Solution: Curated Menus for the Average Joe

The Simple Meals and Recipes Tonight (SMaRT) Pantry uses machine-learning techniques to provide consumers with access to recipes that meet their flavor and time preferences while only using the items they have in their kitchen. By taking user preferences, similar customer data, and pantry contents, SMaRT Pantry generates customized dinner solutions – it’s like having a personal chef hand-pick every night’s dinner menu.

How It Works

Step 1: Pantry contents (as provided by store receipts or manually entered), favorite recipes, food allergies and dislikes, total cook-time preferences, health and budget desires, and preferred difficulty level are uploaded into the SMaRT Pantry app.

Step 2: SMaRT Pantry uses your historic data (ratings, similar-user preferences, recipe characteristics) and, with state-of-the-art machine learning technology, returns a personalized set of suggested recipes. Depending on your settings, this list may include recipes using only what’s currently found in your pantry, or it may generate a grocery list that allows you to pick up a few key items to execute the perfect recipe.

Step 3: Simply rate your meal to improve future recommendations. With additional use, SMaRT Pantry gets better at personalizing recipes for you, providing new ideas and additional variety beyond the meals you would normally cook.
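As a rough sketch of the recommendation logic in Step 2, the snippet below ranks unrated recipes by similar-user ratings and keeps only those whose ingredients are covered by the pantry. The recipes, ratings, and pantry contents are invented for illustration; the real system would use a much richer model and catalog.

```python
# Toy sketch of Step 2: rank recipes by similar-user ratings, then keep only
# those whose ingredients are covered by the current pantry contents.
import numpy as np

recipes = {
    "veggie stir fry": {"rice", "broccoli", "soy sauce"},
    "pasta marinara": {"pasta", "tomato sauce", "garlic"},
    "omelette": {"eggs", "cheese", "butter"},
}

# Rows = users, columns = recipes (same order as `recipes`); 0 = not rated.
ratings = np.array([
    [5, 0, 3],   # target user
    [4, 2, 3],
    [5, 1, 4],
    [1, 5, 2],
])


def recommend(pantry: set[str], target_user: int = 0) -> list[str]:
    names = list(recipes)
    target = ratings[target_user]
    scores = {}
    for other in range(len(ratings)):
        if other == target_user:
            continue
        # Cosine similarity between the target user and each other user.
        sim = np.dot(target, ratings[other]) / (
            np.linalg.norm(target) * np.linalg.norm(ratings[other]) + 1e-9)
        for j, name in enumerate(names):
            if target[j] == 0:  # only score recipes the target hasn't rated
                scores[name] = scores.get(name, 0.0) + sim * ratings[other][j]
    cookable = [n for n in scores if recipes[n] <= pantry]
    return sorted(cookable, key=scores.get, reverse=True)


print(recommend({"pasta", "tomato sauce", "garlic", "eggs"}))
```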

 

How It Works: SMaRT Pantry

 

Additional Features

  • On the day of your choosing, SMaRT Pantry will provide you with a suggested grocery list, taking into account previous purchase behavior, current stock of pantry items, and potential recipes you could make with a few additional ingredients. With the click of a button, it can even directly order those items to your door.
  • Push notifications will provide you with information on what products have been sitting in the pantry for a while and likely need to be used up before reaching their expiration dates.
  • SMaRT Pantry could even be expanded to incorporate other augmented perception technology (e.g. smart fridges, RFID tags, or other sensors) to automatically identify pantry contents, or internet of things appliances to assist in recipe execution (e.g. preheating the oven for you or notifying you to start your slow cooker in the morning).

 

Demonstration Design: SMaRT Pantry vs. Personal Chef

To demonstrate the effectiveness of the SMaRT Pantry, we propose a cooking challenge in which the target consumer provides information on her favorite recipes, allergies, and preferences to a professional personal chef. This chef, using a typical pantry, picks a recipe and prepares a meal for her. SMaRT Pantry takes the same information as the chef and, using its database of other users and user preferences, selects a recipe. A second professional chef prepares this meal. The outcome? Our target customer tastes both meals and sees that SMaRT Pantry is better able to predict what she likes. In other words, SMaRT Pantry is better than a personal chef picking your menu every night! The bonus, of course, is that she can use SMaRT Pantry herself and pick a recipe in a fraction of the time it would usually take.

We could pilot this demonstration either directly for investors (showing them the potential value of this product) or to random potential customers. We could then use those results in advertising, showing testimonials where new users rave about how much the SMaRT Pantry understands their preferences. Our marketing could then be based around the idea of “having a personal chef pick your menu every night.” This gets to the core technology of the system – the data-based approach to choosing a meal that fits every individual’s needs and wants.

 

[i] Americans Officially Spend More at Restaurants Than Grocery Stores

 

By Anecdotal Evidence – Allison Miller, Patrick Miller, and Jordan Bell-Masterson

Blue River Technology

Blue River Technology – Solution Profile (Shallow Blue)

Maker of See & Spray and LettuceBot

Outline of the Problem

Drone equipped with See & Spray technology.

See & Spray

At large farms, it is standard practice to spray an entire field evenly when a weed or pest problem emerges.  This uniform treatment of the crop can result in excessive use of pesticide and herbicide.  There are also environmental and food contamination concerns regarding the use of these pesticides and herbicides. The goal of See & Spray is to decrease the amount of chemical use in agriculture.

Blue River Technology (BRT) developed agricultural machines that use machine learning to distinguish between weeds and crop plants based on their size, shape, and color as the machines drive over fields. The machines spray chemicals in the exact spots they are needed, preventing chemical overuse. The robotics technology allows the smart machine to spray herbicide on the field with precision.
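The toy sketch below illustrates the general idea of classifying a detected plant as crop or weed from simple size, shape, and color features. It is not BRT’s actual model; the features and data are synthetic stand-ins.

```python
# Toy sketch of distinguishing weeds from crop plants using simple
# size/shape/color features, in the spirit of (but not identical to) See & Spray.
# Features and data are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Features per detected plant: [leaf_area_cm2, aspect_ratio, green_hue]
crop_samples = rng.normal(loc=[40.0, 1.1, 0.65], scale=[8.0, 0.1, 0.05], size=(200, 3))
weed_samples = rng.normal(loc=[12.0, 1.8, 0.50], scale=[5.0, 0.3, 0.08], size=(200, 3))

X = np.vstack([crop_samples, weed_samples])
y = np.array([0] * 200 + [1] * 200)  # 0 = crop, 1 = weed

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# A new detection: small, elongated, dull green -> likely a weed, so spray it.
detection = np.array([[10.0, 2.0, 0.48]])
spray = model.predict(detection)[0] == 1
print("spray herbicide" if spray else "do not spray")
```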

Tractor using LettuceBot thinning system

LettuceBot

At times, farmers over-plant certain crops and later thin out the crop to improve overall yield. Thinning is a labor-intensive and expensive process.

LettuceBot is a BRT machine-learning powered machine that can photograph 5,000 plants a minute, “using algorithms and machine vision to identify each sprout as lettuce or a weed.”  A graphics chip identifies each plant in just 0.02 seconds. LettuceBot can also determine whether crops have been planted too close to each other, which could inhibit their growth. If that is the case, it will spray and kill one of the plants without harming the other, increasing overall crop yield. This automates a normally labor-intensive process.

Evaluation of Effectiveness

BRT claims the See & Spray technology can decrease chemical use by a factor of 10, with spraying accuracy within a quarter of an inch. This can result in cost savings for farmers through reduced pesticide purchases, as well as fewer environmental and food contamination concerns. However, most pesticides are fairly inexpensive, so the cost savings are not huge, and we view this product as only moderately effective.

Given the normally manual nature of lettuce thinning, LettuceBot produces significant labor cost savings for farmers. LettuceBot is currently in use on 10% of the lettuce fields in the U.S., and this relatively wide adoption highlights how this product has been very successful so far.

Proposed Alterations

Below are six proposed alterations for BRT’s machines:

1. Improve specificity in identifying weeds

There are many different species of weeds, each with different optimal control agents, and weeds are becoming more resistant; it is therefore increasingly important to identify exactly which weed is growing. BRT could incorporate multiple herbicides into its See & Spray machines and tailor them to the specific weeds identified.

2. Introduce nutrient spraying in addition to herbicide spraying

Given BRT’s existing technology, it should be fairly simple to incorporate targeted nutrient spraying for certain plants, such as plants that look small or weak. This would add an additional value proposition for farmers that does not already exist, as they could accomplish both tasks in one go.

3. Incorporate soil analysis into herbicide and nutrient spraying decision

The efficacy of certain herbicides can depend on the type of soil that the crops are growing in. Analyzing soil can allow for supplementation of nutrients for optimal crop growth.

4. Expansion of LettuceBot to other crop types

BRT should leverage its machine learning algorithms to teach its products to identify other plants, so its products can be used to thin multiple types of crops beyond just lettuce.

5. Market “Low-Pesticide” products

Given the recent popularity of organic foods, BRT should encourage its customers to promote the fact that their crops use 90% fewer pesticides. This would appeal to a health-conscious market and generate stronger sales for farmers, thereby increasing demand for BRT’s products.

6. Sell crop data to third parties

BRT’s products currently gather a wealth of data on the real-time quality of plants across all of its customers. This data could potentially be aggregated at a trend level and resold to financial groups (while protecting individual farmer anonymity) such as hedge funds that are trading agricultural futures.

IBM Sports Analytics, powered by Watson


The field of sports analytics is rapidly emerging as more and more data is being collected on everything from athlete performance to venue operations. In order to optimize the use of all of this information, IBM created a Sports and Entertainment Practice.

Many people think of IBM’s Watson as a traditional smart computer that can sort through large amounts of data for improved decision-making power, but few know about the vast array of possible uses of this machine. IBM’s Watson IoT platform creates solutions in four key areas: improving athlete performance by leveraging real-time insights, predicting team dynamics and financial outcomes with advanced analytics, creating an elevated fan experience, and optimizing venue operations.

Watson IoT helps improve player performance by providing athletes and coaches with performance, biometric, and weather insights in real time, empowering athletes with real-time feedback via connected devices, and creating immediate visibility into athletes’ training and performance. The technology also allows team managers to evaluate individual and team performance through an improved ability to view, organize, and uncover insights in data, and to automatically evaluate the team’s current roster and potential player changes. Watson also helps improve the fan experience by integrating structured and unstructured data to create personalized visitor experiences and gain deeper insights into fan behaviors. Finally, the system assists teams in optimizing venue infrastructure by establishing pervasive connectivity in and around the venue and aligning facilities to business outcomes.


IBM has been able to make a name for itself in the sports analytics space through powerful partnerships with companies such as the Weather Channel and AT&T, and various successful projects with Wimbledon, the USA cycling team, and the Toronto Raptors, to name a few. According to Roger Wood, founder of the San Francisco think tank Art+Data, IBM is among the top 10 most successful sports-oriented companies using data to change the game. According to Wood, the best companies are those that use the power of real-time information to their advantage. Others on the list include Ruckus/Oracle, Nike+, and Sportvision.

Watson is still, however, a work in progress. Where it can improve is in taking the aggregated data and creating actionable recommendations for coaches, players, and teams. Currently, Watson is a great tool to help coaches keep up to date with the latest research on important topics like sleep, recovery, altitude training, and performance nutrition, but these coaches still need to take the information provided (perhaps with the help of an IBM team of consultants) and decide what to do with it. It is therefore up to the athletes whether Watson works well for them, because data without insights does not improve performance.

In identifying IBM’s competitors in this space, it is also worth noting that Cisco created the Cisco Connected Athlete, which “turn[s] the athlete’s body into a distributed network of sensors and network intelligence.” This solution allows athletes to access real-time data on factors such as pace, power, and drive so that they can improve their performance. It is likely that more large players will move into this space, as analytics is quickly becoming a key ingredient for success in professional sports. Major sports leagues are trending toward the use of technology, as shown in the recent approval of the Whoop wearable in MLB. The NBA, however, is still resisting the use of wearables in games, showing that this trend is moving forward, but cautiously. One concern with the data that has become available through IBM sports analytics and other technologies, such as WHOOP, is that the product doesn’t help the people who wear it as much as it helps the institutions and coaches who pay for it. WHOOP has become popular among college teams, and there is rising concern about privacy issues as student-athletes’ personal information becomes available to others for the sake of a potential boost in performance.

In order to remain competitive, IBM should consider improving the value of its offering by providing a comparable service to individuals and non-professional athletes, much like Cisco does. This would allow IBM to reach a broader audience, potentially using readily available tools such as the iPhone. This project would be particularly suited to IBM because it has a background in both hardware and software which, paired with its robust data analytics capabilities, could allow it to create an impressive and attractive consumer product. Additionally, there is a proven market for consumer sports analytics products, evident in the success of the Fitbit.

Sources:

https://www.ibm.com/internet-of-things/iot-zones/sports-analytics/

http://www.cisco.com/c/en/us/solutions/collateral/service-provider/mobile-internet/white_paper_c11-711705.html

https://www.fastcodesign.com/1671570/10-companies-on-the-cutting-edge-of-sports-data

https://www.wareable.com/sport/nba-star-caught-out-using-banned-wearable-during-games-2558

https://thelocker.whoop.com/2017/03/06/whoop-approved-for-in-game-use-in-major-league-baseball/

http://www.latimes.com/business/la-fi-thedownload-ibm-watson-athletes-20151003-story.html

http://deadspin.com/why-is-this-wearable-tech-company-helping-college-teams-1794218363

Wear – You are what you wear, and you wear what you are

 

 

The problem

When it comes to apparel, there are adventurous and conservative shoppers. Adventurous shoppers spend hours researching, browsing, experimenting, and trying on different styles of fashion, and have fun doing it. Conservative shoppers just want to occasionally purchase slightly different hues of the clothes they have worn for years, and the idea of shopping fills them with dread. They like to stick with the familiar because they often cannot visualize how different styles of clothes would look on them. The problem is often not that these shoppers are unwilling to wear different apparel, but that they are unwilling to put in the search costs. This unwillingness to experiment constrains the growth of the $225bn apparel market and contributes to a fashionably duller world.

Our wardrobes for example

The solution

The team proposes an augmented judgment system that uses a shopper’s non-apparel preferences to predict apparel preferences that they may not be aware of. For example, if [Wear] knows that a shopper likes James Bond movies, enjoys wine tastings, drives a Mercedes, spends 45 minutes every morning on personal grooming, and prefers to eat at Grace, it would predict the type of apparel the shopper would like through a model that connects non-apparel preferences to apparel tastes. [Wear] would then remove apparel styles already owned by the shopper from the output to form a set of recommendations of new styles the shopper may like. The result would be distinct from that produced for a shopper who likes Mission Impossible movies, enjoys dive bars, rides a Harley-Davidson, spends 5 minutes on personal grooming, and prefers to eat at Smoque BBQ. The model will use augmented perception techniques to understand the parts of the user’s environment that provide insight into their preferences. The algorithm will generate apparel recommendations that fit the needs and preferences of shoppers who may not understand their own preferences, leading to higher consumer willingness to purchase new apparel and higher industry sales.
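The sketch below illustrates the general idea with a hand-weighted linear model; the styles, features, and weights are invented for illustration, whereas the actual [Wear] model would be learned from shopper data.

```python
# Toy sketch of mapping non-apparel preferences to apparel-style scores.
# The styles, features, and weights are invented for illustration.
import numpy as np

STYLES = ["tailored classic", "rugged casual", "streetwear"]

# Columns: [likes_spy_films, enjoys_wine_tastings, rides_motorcycle,
#           grooming_minutes, fine_dining_preference]
style_weights = np.array([
    [0.8, 0.7, -0.3, 0.02, 0.9],   # tailored classic
    [0.2, -0.2, 0.9, -0.01, 0.1],  # rugged casual
    [0.1, 0.0, 0.3, 0.00, 0.2],    # streetwear
])


def recommend_styles(profile: np.ndarray, owned: set[str]) -> list[str]:
    """Score each style from the shopper profile, dropping styles already owned."""
    scores = style_weights @ profile
    ranked = [s for _, s in sorted(zip(scores, STYLES), reverse=True)]
    return [s for s in ranked if s not in owned]


# Shopper A: Bond films, wine tastings, no motorcycle, 45 min grooming, fine dining.
shopper_a = np.array([1.0, 1.0, 0.0, 45.0, 1.0])
print(recommend_styles(shopper_a, owned={"tailored classic"}))
```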

 

The demonstration

The team would like to produce a working prototype of the system to illustrate its effectiveness in real time. The team envisions asking an audience member to fill out a basic survey on non-apparel preferences. The audience member would then be asked to predict a shirt he would like the most from a selection of 10 shirts, while the [Wear] algorithm will simultaneously predict his preferences. The person would then be presented with both shirts to try on and provide feedback on which one he liked more.

 

Sources:

https://www.statista.com/topics/965/apparel-market-in-the-us/

https://my.pitchbook.com/#page/profile_522553643

Toronto Raptors Digital War Room


By Team CodeBusters

The NBA Draft is a nerve-wracking lottery, and not just in how the first few picks are awarded. Under time pressure, teams are making multi-million dollar bets on which players will become high performing professionals. The hurdles and pitfalls of the wrong decision are numerous, but the potential rewards for picking right are huge. How should a team choose the right player? Historically, teams used highly manual, slow processes prone to human biases and errors including Excel spreadsheets of player metrics, scouting, interviews, and exercise drills. At best, these factors offer a partial snapshot of a player’s future performance at the NBA level.

In February 2016, the Toronto Raptors unveiled their digital war room, which draws on advanced cognitive technology from IBM’s Watson. IBM’s Sports Insights Central solution utilizes real-time data to assess a player’s organizational fit and advise the Raptors on which player to choose. The system pulls in unstructured data from a variety of sources, including players’ statistics, medical records, and social media profiles, and then quickly assesses how well a player would meet the organization’s specific needs. The tool is geared towards getting a more holistic view of the team: Watson is able to quickly analyze a player not only in isolation, but in combination with other draft picks and players already on the team. The Raptors are even able to assess cultural fit in a more rigorous manner, using profiles created from social media habits with the Watson Personality Insights tool. The war room is meant to complement, not replace, the decision-making skills of coaches and managers. It features visualizations and other dashboards that present data to managers in a digestible way. Humans are still making the final decisions, but are doing so with much more relevant and timely information.

We have three primary concerns about the effectiveness of the Raptors’ current program: 1) difficulty measuring impact, 2) a long, unclear commercial payoff, and 3) the efficacy of existing tools for critical choices. Since the implementation of the program, the Raptors have demonstrated marginal improvement in their win-loss record, illustrated in the table below. Beyond this, it is difficult to construct a useful framework to measure impact, especially from a commercial perspective. Often, commercial success is dominated by non-performance factors. Despite many losing seasons, Forbes rates the Knicks the NBA’s most valuable franchise simply because they play in New York City. Star players are drafted at ages 18-19 and tend to be 7-8 years away from their prime. The length of this horizon contributes to measurement challenges and diminishes the investment’s attractiveness, since payoffs are many years away. Finally, basketball is uniquely dominated by the superstar. Unlike in other team sports, single players can dominate their teams and the league, overwhelming draft-room improvements. These talents tend to be well identified and are selected at the top of the draft. Even the smartest draft room has no chance of selecting Kevin Durant, Russell Westbrook, Anthony Davis, etc. unless the franchise is bad enough to earn a top draft slot. In contrast, the most basic draft room would have made the right choice on the most important draft selection of the last 15 years: LeBron James. It is unclear whether superstars who emerged later in the draft (Kawhi Leonard, Jimmy Butler) were cases of prescient player selection or fantastic coaching and player development.

While determining a player’s “fit” with the Raptors based on Watson’s Personality Insights is a nice first step, the team could go further and solicit feedback from current Raptors players and personnel to crowdsource a fit score. The team could also target players based on their publicity and sponsorship potential, selecting players who will put the most fans in the seats and increase the revenue and profit of the team. The existing draft process is very competitive, and in basketball the game-changing recruits are generally well known to all teams, so there may be limited value in applying this model to drafting; it may be more useful in areas other teams have not worked on as much, for instance targeting veterans who can help younger players develop and teach them good habits. The competitive focus around the draft may limit the edge the model gives the Raptors, so finding novel data to utilize or creative applications of the model may give them more of an advantage.

 

Appendix I (Win Statistics)

Year    Wins    Losses
2012    48      34
2013    48      34
2014    48      34
2015    56      26

 

Links:

http://www.itworldcanada.com/article/toronto-raptors-unveil-a-digital-war-room/380686

https://techcrunch.com/2016/02/10/ibm-watson-teams-with-toronto-raptors-on-data-driven-talent-analysis/

http://techportfolio.net/2016/05/how-much-is-watson-ai-helping-the-raptors/

https://motherboard.vice.com/en_us/article/toronto-raptors-nba-draft-day-ibm-watson

http://grantland.com/features/the-toronto-raptors-sportvu-cameras-nba-analytical-revolution/

https://www.ted.com/talks/rajiv_maheswaran_the_math_behind_basketball_s_wildest_moves#t-419990

Knewton Adaptive Learning Technology

Problem Outline

Knewton was founded in 2008 by Jose Ferreira, a former executive at Kaplan, Inc., to allow schools, publishers, and developers to provide adaptive learning for every student. Knewton believes that no two students are alike in their background or learning styles, and that education needs to be tailored to every child’s strengths and weaknesses. Knewton draws on each student’s history, on the interests of students with similar learning styles, and on decades of research on improving learning experiences to recommend the next best course or activity to maximize the student’s learning. By doing so, Knewton has helped Arizona State University (among others) increase pass rates by 17%, reduce course withdrawal rates by 56%, and accelerate learning, with 45% of students finishing a course four weeks early.

Solution

Knewton utilizes adaptive learning technology to create a platform that allows educational institutions and software publishers to tailor educational content for personal use. Started as an online test prep software company, Knewton now aims to identify the next best step in the user’s learning journey. By partnering with leading US universities and publishers like Pearson, the adaptive learning platform aims to end the one-size-fits-all curriculum, making personalized curricula accessible across K-12 and college education. Knewton’s solution offers a two-pronged approach to curriculum recommendation, guiding students on the next best thing to learn and how they should learn it. The recommendations can drive the complete learning experience or serve as tailored remediation in response to test performance.

This is achieved through data. Once a student logs in to the platform, every keypress and mouse movement is recorded as part of the clickstream to understand their behavior.

The adaptive learning algorithm then uses this data to understand different dimensions of the learning experience, such as engagement, proficiency, boredom, and frustration, measured through time spent on learning modules, error rates, assessments taken, and so on. For instance, Knewton uses item response theory to assess and compare proficiency based on an individual’s quiz responses relative to the overall population of test takers.
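For illustration, the snippet below implements a simplified one-parameter (Rasch) item response model of the kind referenced above; the item difficulties and student responses are made up, and Knewton’s actual models are certainly more sophisticated.

```python
# Minimal sketch of a one-parameter (Rasch) item response model.
# The response data and difficulties are illustrative only.
import numpy as np


def p_correct(theta: float, difficulty: np.ndarray) -> np.ndarray:
    """Probability of answering each item correctly, given ability theta."""
    return 1.0 / (1.0 + np.exp(-(theta - difficulty)))


def estimate_ability(responses: np.ndarray, difficulty: np.ndarray) -> float:
    """Grid-search the ability value that maximizes the likelihood of the responses."""
    grid = np.linspace(-4, 4, 801)
    best_theta, best_ll = 0.0, -np.inf
    for theta in grid:
        p = p_correct(theta, difficulty)
        ll = np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))
        if ll > best_ll:
            best_theta, best_ll = theta, ll
    return best_theta


item_difficulty = np.array([-1.0, 0.0, 0.5, 1.5, 2.0])   # easy -> hard
student_answers = np.array([1, 1, 1, 0, 0])               # correct / incorrect
print(f"Estimated proficiency: {estimate_ability(student_answers, item_difficulty):.2f}")
```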

Evaluate Effectiveness and Commercial Promise

Knewton has chosen to position itself as an adaptive learning platform that partners with educational content providers to create personalized learning experiences. Its partners include Houghton-Mifflin, Pearson, and Triumph Learning, which has given it considerable weight in the US market. In addition, Knewton has served 13 million students worldwide through its platform, having also targeted developing markets where there are fewer structural education barriers to overcome. Finally, Knewton has been working on creating partnerships with MOOCs as well as universities. Results reported by Knewton on its partnership with Arizona State University in developmental math courses show that pass rates increased by 11 percentage points while withdrawal rates decreased by 50%.

Knewton’s competitors in the adaptive learning space include Kidaptive, McGraw-Hill Education, Smart Sparrow (an Australia-based company), Dreambox Learning, and Desire2Learn, among others. While each competitor has its own set of results and wins, it is notable that Smart Sparrow has reported reducing failure rates from 31% to 7% in a mechanics course, and it is also working with Arizona State University. So while Knewton has seen promising results from its platform and has a lot of traction, competitors are able to get similar if not better results. One pseudo-competitor that Knewton could think about partnering with would be alternative schools, such as AltSchool, as charter schools and alternative methods of education become increasingly popular. This would give Knewton another avenue to leverage its platform while also giving it an edge over current competitors.

Proposed Alterations

  • Where students sit in a classroom
    • Knewton is using Engagement modeling to determine how engaged virtual students are. The same methodology could be extended to the classroom.
    • Using photo sensors, Knewton could incorporate classroom seating location into its analytics. Perhaps it could be determined whether a student’s learning is affected by where they sit in a classroom relative to the teacher and other students.
  • Integration with standardized testing
    • The Knewton adaptive ontology can be used to better understand student preparedness for standardized tests, as well as the effectiveness of the tests themselves, particularly through the assessment and prerequisite relationships, which show whether students understand concepts that build on previous concepts.
    • The Knewton tool could help standardized test developers prove that the concepts intended to be tested are indeed those being tested. It could also help students prepare for the test.
  • Integration with student loan underwriters
    • Results at Arizona State University indicate significant improvements in withdrawal rates. Non-completion of degree programs is the leading cause of student loan defaults.  Knewton insights could be used as an indicator of student loan default risk.
    • Data privacy may be an issue at the individual level.

Team: Cyborbs

Members: Alisha Marfatia, Paul Meier, Sakshi Jain, Scott Fullman, Shreeranjani Krishnamoorthy

Tesla Autopilot Technology

Opportunity & Solution Summary

Beyond the enormous societal benefit of reducing traffic collisions, a connected fleet of autonomous vehicles allows for more predictable, efficient traffic flow; improved mobility and productivity among travelers; and–eventually–a business model shift from outright vehicle ownership to ‘transportation-as-a-service’.

Looking ahead, the National Highway Traffic Safety Administration (NHTSA) created a five-level classification system of autonomous capabilities to measure progress and innovation:

In October 2015, Tesla Motors pushed software version 7.0 to its Model S customers, which included Tesla Autopilot, the most advanced publicly-available autonomous driving software.

While many companies have developed autonomous capabilities (particularly Google, who, as the first-mover, logged 1 million fully-autonomous miles before Tesla launched Autopilot), Tesla’s software has uniquely iterated and addressed the changing needs of the user to become the superior solution.  Interestingly, 20+ automakers have more autonomous driving patents than Tesla (mostly surrounding anti-collision and braking control mechanisms), but Tesla has been the first automaker to provide substantial Level 3 features in the marketplace.

This has enabled Tesla to leverage its thousands of drivers to quickly improve its algorithms via ensemble training. By pushing these solutions to the market, Tesla has logged 50-fold more autonomous miles (supplemented by user feedback) than Google to boost algorithm performance. In the short run, this means improving vehicle efficiency and customer safety. In the longer run, it means reaching full self-driving automation (“Level 4”). The software’s continuous learning technology enables the autonomous cars to update as new behaviors are observed from users.

NVIDIA and Tesla together have fed millions of miles’ worth of driving data and video to train the computer about driving. Tesla leverages NVIDIA’s DRIVE PX 2 platform to run an internally developed neural net for vision, sonar, and radar processing. DRIVE PX 2 works in combination with version 6.0 of NVIDIA’s deep-learning CUDA® Deep Neural Network library (cuDNN) and the NVIDIA Tesla P100 GPU to detect and classify objects 10x faster than the previous processor, dramatically increasing the accuracy of its decision-making.

Effectiveness, Commercial Promise, and Competition

While Google’s technology is more precise–its LIDAR system builds a 360-degree model that tracks obstacles better than Tesla’s, and can localize itself within 10 centimeters–Tesla’s is publicly available at a reasonable price. Tesla’s most recent hardware set includes forward-facing radar, as well as eight cameras and twelve sensors around the vehicle. The company continues to roll out new features in regular over-the-air updates.

To date, Tesla’s continuous push of new/updated Autopilot features has been (largely) successful in improving consumer safety.  Following a 2016 investigation into a deadly crash involving a Tesla Model S (which was closed without issue), the U.S. Department of Transportation found Tesla’s Autosteer feature had already improved Tesla’s exemplary safety record, reducing accidents by 40%, from 1.3 to 0.8 crashes per million miles.

 

Tesla’s software algorithms are a short-run competitive advantage over other automakers; its technology is in the hands of more users, quickly improving its solution. However, as fully autonomous driving becomes commoditized over the next 10-30 years, the automotive business model will shift from vehicle ownership to transportation-as-a-service, and the competitive advantage will shift toward mass-market fleet vehicle manufacturers (e.g., Toyota, Ford, GM). If vehicles aren’t owned by the end user and are instead summoned or rented, the need for a superior driving experience drastically decreases in favor of the cheapest fare. Accordingly, GM invested $500M in Lyft last year to begin building an integrated on-demand network of autonomous vehicles.

Improvement and Alterations

Tesla has made progress since its first software push, but according to Elon Musk, the company is multiple years away from pushing out Level 4 capabilities. Moving forward, Tesla’s biggest obstacles (beyond regulation) are better local road mapping; removing the need for user input; and stronger recognition of stop signs, traffic lights, and road updates. In most geographies, many Autopilot features are geoblocked, restricting use primarily to highways and other major roads. By training its software to better recognize stop sign images, as well as traffic light locations and color changes, Tesla can make Autopilot usable in more local situations. In addition, Tesla’s publicly available vehicles are not yet truly autonomous, even on highways. Vehicles have hands-on warnings that require the driver to stay engaged throughout the ride, as well as a feature that shuts off Autopilot for the remainder of the drive cycle if the driver fails to respond to alerts (“Autopilot strikeout”).

 

Tesla’s Autopilot In Action

Blog post by Ex Machina Learners

Sources

Ackerman, Evan. “GM Starts Catching Up in Self-Driving Car Tech with $1 Billion Acquisition of Cruise Automation.” IEEE Spectrum: Technology, Engineering, and Science News. N.p., 14 Mar. 2016. Web. 07 Apr. 2017.

“Autopilot.” Tesla, Inc. Apr. 2017.

Fehrenbacher, Katie. “How Tesla’s Autopilot Learns.” How Tesla’s Autopilot Learns. Fortune, 19 Oct. 2015. Web. 07 Apr. 2017.

Habib, Kareem. “Automatic Vehicle Control Systems.” U.S. Department of Transportation NHTSA Announcement. Jan. 2017.

“NVIDIA CuDNN.” NVIDIA Developer. N.p., 30 Mar. 2017. Web. 07 Apr. 2017.

Pressman, Matt. “Inside NVIDIA’s New Self-Driving Supercomputer Powering Tesla’s Autopilot.” CleanTechnica. N.p., 25 Oct. 2016. Web. 07 Apr. 2017.

Randall, Tom. “Tesla’s Autopilot Vindicated With 40% Drop in Crashes.” Bloomberg.com. Bloomberg, 19 Jan. 2017. Web. 04 Apr. 2017.

Vijh, Rahul. “Autonomous Cars – Patents and Perspectives.” IPWatchdog.com | Patents & Patent Law. N.p., 06 Apr. 2016. Web. 07 Apr. 2017.

Uptake!

Uptake, a Chicago-based data analytics firm, was founded in 2014 by Brad Keywell and Eric Lefkofsky to develop locomotive-related predictive diagnostics. Its predictive analytics Software-as-a-Service platform aims to help enterprises improve productivity, reliability, and safety through a suite of solutions including predictive diagnostics and fleet management applications.

 

Every time a piece of equipment goes idle due to equipment failure or poor planning, there are two costs: a) the cost of the repair in parts, labor, etc., and b) the opportunity cost of lost revenue. There are also substantial costs involved with keeping contractors nearby while waiting for the machines to return to service. Downtime, scheduled or unscheduled, is essentially time during which the site and the equipment are not earning back their investment costs.

 

The Uptake platform uses machine learning combined with knowledge from industrial partners to deliver industry-specific platforms and applications that solve complex and relevant industrial problems, such as predicting equipment failure, which can result in enormous savings. It combines data science with the massive data generated by the plethora of sensors in these machines to find signals and patterns that power predictive diagnostics. In addition to shifting from a reactive ‘repair after failure’ mode to a proactive ‘repair before failure’ stance, Uptake also helps customers track fuel efficiency, idle time, location, and other machine data.
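As a simple illustration of the general idea (not Uptake’s actual method), the sketch below flags a machine for proactive service when a vibration signal drifts away from its recent baseline; the data and thresholds are invented.

```python
# Illustrative sketch (not Uptake's actual method) of flagging a machine for
# proactive service when a sensor drifts outside its normal operating band.
import numpy as np


def flag_for_service(vibration: np.ndarray, window: int = 50, z_limit: float = 3.0) -> bool:
    """Flag the machine when the latest readings drift far from the earlier baseline."""
    baseline = vibration[:-window]
    recent = vibration[-window:]
    z = (recent.mean() - baseline.mean()) / (baseline.std() + 1e-9)
    return abs(z) > z_limit


rng = np.random.default_rng(1)
normal_ops = rng.normal(1.0, 0.05, size=950)   # healthy vibration signal
degrading = rng.normal(1.4, 0.05, size=50)     # bearing beginning to wear
print(flag_for_service(np.concatenate([normal_ops, degrading])))  # -> True
```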

 

Uptake has a very strong value proposition and commercial relevance. The company claims that its solution covers industry segments including rail, mining, agriculture, construction, energy, aerospace, and retail. Its marquee client is Caterpillar, which has also invested in the firm. Instead of building its own integrated services, Caterpillar shared the know-how of its equipment and works with Uptake, which has more than 300 engineers, data scientists, and designers. Uptake has also recently announced its foray into the wind energy space by adding two subsidiaries of Berkshire Hathaway Energy to its client roster: MidAmerican Energy Company and BHE Renewables. Uptake’s current annual revenue run-rate exceeds $100 million, and because of its unique algorithms and industry focus it is valued at $2Bn.

 

While Uptake generates immense value for construction equipment predictive diagnostics, it could further improve its predictions by also incorporating environmental conditions like soil structure, site geometry, operating weather conditions, and precipitation. Through the use of sensors, these factors can be measured even before the equipment is put to use, helping to better estimate the wear-and-tear costs and time delays associated with a given project. Using this “perceptive” data collected through sensors, equipment firms could also manage their replacement inventory, further reducing operational costs.

 

 

Uptake’s Products

 

 

Source: Company website

 


 

Presence across industries

Source: Company website

Sources:

https://uptake.com/products#2

http://chicagoinno.streetwise.co/2015/03/05/caterpillar-invests-in-uptake-the-groupon-and-brad-keywell-led-data-company/

http://siliconangle.com/blog/2017/02/01/predictive-analytics-startup-uptake-raises-40m-new-round/

http://bigdatanewsmagazine.com/2017/03/03/uptake-is-bringing-predictive-analytics-to-2-wind-energy-companies-chicago-inno-2/

https://www.forbes.com/sites/briansolomon/2015/12/17/how-uptake-beat-slack-uber-to-become-2015s-hottest-startup/#7cd7f1dc6cd0

http://autodeskfusionconnect.com/machine-2-machine-how-smart-apps-monitor-construction-site-and-equipment-for-better-project-margins/

http://www.bauerpileco.com/export/sites/www.bauerpileco.com/documents/brochures/bauer_bg_brochures/CSM.pdf

 

 

Anecdotal Evidence: Profile – Array of Things

 

Array of Things

A new urban initiative called Array of Things is attempting to be a “fitness tracker for a city” by installing sensors throughout the City of Chicago.

Array of Things Sensor

Problem: Local Pollution

The WHO estimates that urban air pollution, most of which is generated by vehicles, industry, and energy production, kills 1.2 million people annually. While most of these deaths occur in developing countries, Chicago still faces significant issues: in 2016, Cook County was given an “F” for air quality by the American Lung Association. There are many pieces of this problem that Chicago is attempting to tackle, but one important aspect is understanding how air pollution affects citizens’ day-to-day lives and how different levels of pollution affect different regions of the city. The goal of increasing understanding is to aid the city in developing additional programs to curb air pollution and in engaging with the public to find solutions.

 

Map of Potential City Installations

Augmented Perception Solution

Array of Things is an effort (sponsored in part by the City of Chicago) to install hundreds of inexpensive, replaceable sensor devices across the city to track all sorts of pollution indices. These sensors use carbon monoxide detectors and pollen counters to measure air pollution and cameras and microphones to measure congestion and noise pollution. The data measured will then be both relayed to relevant departments in the City of Chicago and posted online to the public. The hope is that this data will help city planners better optimize planning decisions (e.g. traffic flow around a school or where to install a bike path) and potentially allow the public and academics to better understand the role hyper-local pollution has on citizen health and well-being. Besides focusing on air pollution, Array of Things is also striving to be a platform for monitoring a host of other city data. While the ultimate applications are unknown, they see the potential to leverage this sensor equipment to transform the way city planning decisions are made, not just from a health perspective.
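To illustrate how such readings might be turned into something actionable, the sketch below aggregates hypothetical node readings into a simple neighborhood-level air-quality score; the measurements, neighborhoods, and scaling are invented for illustration.

```python
# Hypothetical sketch of turning raw node readings into a neighborhood-level
# air-quality score; thresholds and data are invented for illustration.
import pandas as pd

readings = pd.DataFrame({
    "neighborhood": ["Loop", "Loop", "Pilsen", "Pilsen"],
    "co_ppm":       [4.2, 5.1, 8.9, 9.4],   # carbon monoxide readings
    "pollen_idx":   [3.0, 2.5, 6.0, 7.2],
})

# Simple score: scale each measure to 0-1 against an illustrative "high" level.
readings["score"] = (readings["co_ppm"] / 10.0 + readings["pollen_idx"] / 8.0) / 2

by_area = readings.groupby("neighborhood")["score"].mean().sort_values()
print(by_area)   # lower = cleaner air in this toy scoring
```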

First Installations

Array of Things Results

Results have been limited. The first machines were installed in late 2016 and data has yet to be made publicly available. That said, other cities are excited about this idea – with Seattle as a likely second city for installation and Bristol and Newcastle as the first international destinations.

Proposed Modifications

We have two major changes we would propose to this project. First, we would strive to solidify some of the goals and particularly the involvement with the city. While the City of Chicago has paid lip service to the project, there are no concrete changes that the city has agreed to make based on the results. Getting buy-in for making concrete changes (e.g. committing to help clean up the more polluted but populated areas of the city) before seeing results would help increase the chance that changes to improve citizen health would actually be made. Along those lines, creating concrete ratings to grade different areas in terms of pollution and broadcasting those ratings would help both incentivize local changes and increase awareness of high-pollution areas. Second, we would advocate for limiting the scope of the goal of Array of Things, at least in terms of its marketing/pitch. In most of their marketing, they describe the ability of their system to do everything from notifying individuals of ice patches to finding the most populated route for a late-night walk. While these are potential applications of their sensors (and we do not advocate removing any sensors), tailoring the vision to have more concrete and limited goals will make it successful in the near term. By trying to do everything at the same time, the effort risks overstating its value and missing out on the most impactful results, particularly those around pollution.

 

Sources:

https://www.nsf.gov/news/special_reports/science_nation/arrayofthings.jsp

https://news.uchicago.edu/article/2016/08/29/chicago-becomes-first-city-launch-array-things

http://www.bbc.com/news/technology-39229221

http://www.computerworld.com/article/3115224/internet-of-things/chicago-deploys-computers-with-eyes-ears-and-noses.html

https://gcn.com/articles/2017/03/07/sensor-net-resilience.aspx

http://www.who.int/heli/risks/urban/urbanenv/en/

http://www.lung.org/our-initiatives/healthy-air/sota/city-rankings/states/illinois/

http://www.govtech.com/fs/Array-of-Things-Expands-to-Cities-with-Research-Partnerships.html