Tribe: The “Pokemon Go” of fitness (Team Awesome)

  • Genesis and Overview

Our team began by reviewing Pokemon Go's success as an augmented reality platform. The game superimposed virtual objectives onto real-world locations through a user's cellphone, and players interacted with them through augmented reality. The game was wildly popular, and one of its unintended side effects was that it motivated a large number of players to be more physically active. Because the game's objectives were scattered across different locations, competitive players had to walk significant distances. While fitness was not the primary purpose of the game, it got us thinking that there was definitely an opportunity to be tapped.

With the increased adoption of wearables, the cult-like fitness industry is ripe for AI integration. Tribe is our vision of an adaptive workout platform that combines machine learning and augmented reality. Tribe knows your physical attributes, such as height and weight, how much you ate yesterday, how much you slept, and even what your fitness goals are. Based on all of this, it can propose a workout tailored to what your body needs at that very point in time. Each person will have a completely unique interaction with Tribe.

  • The Problem

There is a whole range of fitness apps on the market now that attempt to guide users towards individual fitness goals. Most of them are fairly rigid and expect you to follow a set training program without any deviation. The classic couch-to-5k running guide is a clear example: you can't skip or modify a day's workouts. Given this rigidity, it's not surprising that the attrition rate from these programs is remarkably high.

A seemingly simple solution for the individual who wants a varied, customized workout is to hire a personal trainer. However, that is a costly route that wouldn't work for most people. To figure out how our platform could help, we tried to understand exactly what a personal trainer does and how that role could be replaced by AI. A personal trainer is usually well educated, with a substantial background in kinesiology. Their training style has evolved with the number of clients they've seen: they learn to prevent injury before it happens, to understand each client's limits, and to push clients farther. In essence, it is a skill built on interpretation refined over many repetitions – which sounds a lot like machine learning.

  • The Solution

Enter Tribe, a software based solution that provides the first-ever augmented reality workout. Tribe will leverage a system of virtual gyms similar to Poke gyms with particular workout functions at each gym. This means that as the user base grows, groups will exist at certain gym locations and users will be performing the same exercises simultaneously. While this is not a group workout, there is a social component to it. Members, in a sense, are part of a workout tribe, similar to those that have been popping up around the globe in the social fitness craze of the last decade.

App-based fitness challenges are still virtual. One of the most well known virtual experiences is Fitbit’s “Adventures.” Through adventures, users can take a virtual journey through a wilderness hike, motivating them to reach a step goal. The experience is based on images, and there is no integration with the user’s surrounding environment.

  • How it works

Step 1 – Collecting User Data. Users sign up for the platform and integrate it with their existing fitness tracker. For the purpose of this illustration, we'll assume that Tribe is synced to a Fitbit. The user is then able to confirm the height and weight inputs retrieved from Fitbit as well as add their own personal information, such as location, workout preferences and any past injuries. Based on all the information provided, Tribe is able to access historical fitness data from the Fitbit (heart rate, step count, workout intensity) and combine this with the workout goals to create a custom workout plan for the user.

Step 2 – Location Recommendation. When users want a workout, they can access Tribe and tell it how much time they have – 60 minutes, for example. Tribe will then process where you are, as well as the workout you need, and give you directions to a workout location. Users will be able to select from free locations (a public fitness area or an open field) or paid locations (a gym that Tribe has partnered with). The prompt to go to the location will be very similar to that of Pokemon Go, which engages the user in a game-like environment using augmented reality.

Step 3 – Workout Recommendation. Once you're at the location, Tribe's guided workout begins. This is where the platform's core competency lies. Tribe will already have all your data from the current day, and even the day before. It'll be able to understand that you may have had a long night out with little sleep, or that you haven't worked out in 4 days and have been sleeping 12 hours a day. Based on this information, as well as your physical attributes and goals, Tribe will be able to propose a workout that's customized for you at that very point in time.
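In its simplest form, such a recommendation could start as a rule layer over the tracker data, ahead of any learned model. A minimal sketch – all field names and thresholds below are illustrative assumptions, not Tribe's actual logic:

```python
def recommend_intensity(hours_slept, days_since_workout, resting_hr, baseline_hr):
    """Toy heuristic: scale target workout intensity (0.2-1.0) from
    recovery signals. Field names and thresholds are illustrative."""
    score = 1.0
    if hours_slept < 6:                   # short sleep -> back off
        score -= 0.3
    if resting_hr > baseline_hr * 1.1:    # elevated resting HR -> fatigue
        score -= 0.2
    if days_since_workout >= 4:           # well rested -> push harder
        score += 0.2
    return max(0.2, min(1.0, score))
```

A real system would replace these hand-tuned rules with a model trained on the user's historical response to workouts, but the inputs would be the same.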

Step 4 – The Guided Workout. Very much like Pokemon Go, Tribe will guide you through your workout via augmented reality. Users will be able to place their phone in a visible place and see objectives on the screen – perhaps an animated balloon 4 feet off the ground as a target for box jumps, or a cartoon character that holds a 2-minute plank with you. In addition, the front- and back-facing cameras on the phone will keep users honest by tracking their movements – very much like a more modern, portable version of Dance Dance Revolution.

  • Challenges and proposed alterations

One of the challenges for this app is that, unlike Pokemon Go, it will need workout-specific spaces that account for the safety of users. A gym cannot exist in the middle of an intersection, for example, and an exercise that requires the user to impact the ground cannot take place on concrete.

  • Marketing and partnerships

Some obvious marketing opportunities exist, starting with existing gyms and personal trainers. As the workouts could be anywhere, one potential location is existing gyms. Additionally, the trainers at the gym or independent trainers could craft workouts that can be integrated into the app and advertised to its users. Our target customer would already be interested in working out, and therefore our marketing campaigns will be targeted at the places they already are – in addition to gyms, also 5k races, farmers markets, and healthy food stores where athletes looking for more personalized workouts congregate. Lastly, we will use social media, particularly fitness influencers, to gain traction.

  • Funding Requirement and Timeline

While we will need $2M to fully develop this technology, we are asking for $200K in funding to get us to a first pilot in one mid-sized city. This will enable us to create the initial exercise programming prototypes, develop the initial software prototype, identify locations within a city for gym activities, and partner with local organizations to gain a following. Once we can prove the model in a single city, we will expand to additional geographies.

 

MediLinx

A source of inefficiencies is an opportunity for MediLinx

MediLinx aims to be the leading data solutions provider for the healthcare market in Latin America. To do so, MediLinx serves as a third-party SaaS provider for private insurance companies in Latin America. Claims are currently filled out manually, submitted via mail to insurance companies, and then manually re-entered into the computer. The MediLinx solution removes these inefficiencies by supporting digital record creation, claim automation, and data analytics in medical claim processing. Through this, we will streamline the claim process to make it more efficient and help prevent fraudulent claims.

In Latin America, the current claim processing is slow, outdated and inefficient, thereby generating high administrative costs for the insurance company.

We plan to pilot in Mexico, where we have found and analyzed the main pain points for the insurance companies:

  1. Claim processing is manual-intensive, requiring excess time spent reviewing and inputting claims
  2. Fraud (15% of claims)
  3. Hospitals take too long to fill and send needed forms

 

Entry strategy and Market Size

Our end customers are private insurance companies with operations within Latin America. Our initial target customer will be located in Mexico City, and once the concept is proven we will roll out to other major cities in Mexico. After Mexico, we will scale to other major countries in Latin America. Most of our potential clients in Mexico also have a strong presence across Latin America, which will help us achieve our international expansion.

The healthcare industry in Mexico is estimated at $77bn USD. We are focusing on the private healthcare sector, which holds a significant weight in total expenditure (45% of total health expenses).

Our clients are private insurance companies. In Mexico there are over 40 health insurance companies, although 5 players control most of the market share. They serve around 8% of the total population in Mexico; however, their penetration in our initial market is much higher, at 18%. This market is growing and is expected to double by 2030.

Product Overview

There are two main features of our product: 1) digitization and 2) automation. The real value creation comes from the automation; nonetheless, digitization is the feature that enables the automation process.

The claim processing platform consists of a three-user system: 1) insurance company, 2) patient, 3) doctor. Both patients and doctors can upload documents and fill out claim forms directly on the platform, and from there request that a claim be entered for instant processing.

There are two components to the product: the front end and the back end.

  • Front End (Sensors): This is the platform for users, composed of three different dashboards. Each dashboard is designed and built to meet that user's requirements. Here, doctors input the patient's claim report. Patients have a dashboard to view, edit and sign the reports. Insurers are able to access the final reports with the analytical results attached to each one.
  • Back End (Algorithm): The core system is designed to process claims automatically. It works through machine learning, analyzing historical and new data to learn the parameters that make a claim valid or invalid. The system reviews the claim, the patient's insurance policy, and the patient's current status with the insurer to determine the validity of the claim. The data is captured (digitized) either through user input on the platform or from a scanned picture of the documents; scanned documents are converted to digital data and then fed into the automation process. Once a new claim goes through the system, it is automatically processed and either preapproved or flagged for manual analysis. The system then builds the reports output to the different user dashboards.
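A minimal sketch of the triage step described above, with a transparent rule set standing in for the trained model (all field names, amounts and thresholds are illustrative assumptions):

```python
def triage_claim(claim, policy):
    """Toy triage: route a digitized claim to preapproval, manual review,
    or rejection. In production this risk score would come from a model
    trained on historical claims; here simple rules stand in for it."""
    if claim["procedure"] not in policy["covered_procedures"]:
        return "rejected"
    risk = 0.0
    if claim["amount"] > policy["coverage_limit"]:
        risk += 0.5  # exceeds the policy's coverage
    typical = policy["typical_amount"].get(claim["procedure"], claim["amount"])
    if claim["amount"] > 3 * typical:
        risk += 0.3  # unusually large for this procedure -> possible fraud
    if not claim["signed_by_patient"]:
        risk += 0.3  # incomplete paperwork
    return "preapproved" if risk < 0.3 else "manual_review"
```

The same structure – hard eligibility checks first, then an accumulated risk score with a preapproval threshold – carries over when the rules are replaced by a learned model.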

The ask

The company requires $350K to launch. 30% will be used for product development, which includes an MVP and the final platform. We will use 20% for pilot testing before going into full development, to minimize risk. The rest will be allocated to first-year salaries and marketing expenses.

Team Dheeraj – AI’s got the feels

Alexa, sing me a song.

The Amazon Echo is but one of several voice-powered devices that have been gaining wide customer appeal over the past year. During the holiday season, Alexa responds with a Christmas carol. Imagine, though, if Alexa could respond with a song or playlist matched to your mood, based solely on the tone of your voice.

Enter Beyond Verbal, a company going beyond data analytics to Emotions Analytics: “by decoding human vocal intonations into their underlying emotions in real-time, Emotions Analytics enables voice-powered devices, apps and solutions to interact with us on a human level, just as humans do.”[1]

How does it work? Beyond Verbal's software examines 90 different markers in the voice that, according to the company, can show loneliness, anger, and even love. The software simply needs a device with a microphone, and then measures attributes including Valence, Arousal, Temper, and Mood groups. To date, the software has analyzed 2.3 million recorded voices across 170 countries, drawing on 21 years of research.
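Valence and arousal are standard dimensions in affective computing's circumplex model of emotion. As a toy illustration of how continuous scores on those two axes might map to a coarse mood label, here is a minimal sketch; the thresholds and labels are our own assumptions, since Beyond Verbal's actual taxonomy is proprietary:

```python
def mood_quadrant(valence, arousal):
    """Map valence (unpleasant..pleasant) and arousal (calm..excited)
    scores in [-1, 1] to a coarse circumplex-style mood label.
    Labels and cutoffs are illustrative, not Beyond Verbal's."""
    if valence >= 0 and arousal >= 0:
        return "excited/happy"
    if valence >= 0:
        return "calm/content"
    if arousal >= 0:
        return "angry/stressed"
    return "sad/lonely"
```

The hard part, of course, is estimating valence and arousal from raw audio in the first place; that is where the company's 90 vocal markers come in.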

Recently, the company launched a free mobile phone app, Moodies. The app runs continuously and supplies a new emotion analysis every 15-20 seconds. I tested it out while researching for this post and was informed that I was exhibiting feelings of loneliness, seeking new fulfillment and searching for warmth. Amusingly, it also stated that I exhibited signs of needing to be right and was ignoring reality – this was shortly after I said that I was skeptical. In a 2-3 minute span, the app proceeded to flash through a series of such rapid-fire assessments of my speech.

In terms of commercial use cases, the simple Alexa example above is one of many. Others have proposed its use in customer service call centers, executive coaching sessions, and, as of late, even to diagnose disease! In fact, the Beyond Verbal team collaborated with the Mayo Clinic to compare the speech of patients who had coronary artery disease and blocked arteries against that of healthy subjects, and found a distinct voice factor associated with the likelihood of heart disease. The research analyzed the voices of 88 patients, an additional 9 undergoing other tests, and 21 healthy people. Individuals exhibiting the identified voice factor were 19X more likely to have heart disease. [2][3]

The company has also identified another use case that will help hospitals and health/fitness app developers. They're hoping to help individuals understand how changes in a person's environment may impact a patient. "We could see that people were tired in the morning, became more enthusiastic, active and creative throughout the day, but that their anger levels spiked around lunchtime," says CEO Yuval Mor. [4] One of Beyond Verbal's recent funders, Winnovation, mentioned that smart devices could continuously monitor a person's voice and send alerts or make emergency calls if a medical problem requiring immediate attention occurred.

Competitors in the emotional analytics space include Swiss startup nViso SA, which developed a prediction process for patients needing tracheal intubation using facial muscle detection, and Receptiviti, a Canadian startup that uses linguistics to predict emotions. [5]

Suggestions/Improvements:

A more forgiving industry – Voice recognition has come a long way since the first generation of Siri, and even now it is far from perfect. Assessing intonations in voice and attributing them to emotion is an even more herculean task. It's interesting that the company has jumped immediately to applications in the healthcare space (recall the collaboration with the Mayo Clinic above), an area that arguably demands a high level of diagnostic accuracy. A more forgiving industry might be a better place to start commercializing the technology.

Accuracy across cultures and limited applicability for non-English speakers – As of now, it isn't clear how the software accounts for cultural nuances in communication. Presumably testing on other languages will help with some of this, but research should also be done on English speakers from a variety of backgrounds and geographies. The company has begun to do research and run tests on Mandarin speakers. Testing on other languages and cultures will be important to expand the potential user base.

Mechanism for feedback on Moodies – While the mobile app is a step in the right direction toward collecting more data points to confirm its research and algorithm, the app currently isn't optimized for improvement or further learning. Users can't confirm or deny the accuracy of the predicted emotions. Moodies also doesn't allow users to enter their environmental conditions or what activity they are performing. If the goal is eventually to use the data to see how changes in a person's environment affect mood, the app ought to have a way to collect this data.

Privacy and Personal Data – Another area for concern and improvement is clarity around privacy, particularly if Beyond Verbal intends to target solutions in the healthcare space. According to Matthew Celuszak, CEO of CrowdEmotion, a competitor to Beyond Verbal: "in most countries, emotions are or will be treated as personal data and it is illegal to capture these without notification and/or consent." Consent is only one area of concern, as the laws and regulations around storage and use of personal data create additional burdens for companies. [5]

  1. http://www.beyondverbal.com
  2. http://www.beyondverbal.com/can-your-voice-tell-people-you-are-sick/
  3. http://www.beyondverbal.com/beyond-verbal-wins-frost-sullivans-visionary-innovation-leadership-award-using-vocal-biomarkers-to-detect-health-conditions-using-tone-of-voice-2/
  4. https://blogs.wsj.com/venturecapital/2014/09/18/beyond-verbal-raises-3-3-million-to-read-emotions-in-a-speakers-voice/
  5. http://www.nanalyze.com/2017/04/artificial-intelligence-emotions/
  6. https://thenextweb.com/apps/2014/01/23/beyond-verbal-releases-moodies-standalone-ios-app/#
  7. http://www.mobihealthnews.com/content/emotion-detecting-voice-analytics-company-beyond-verbal-raises-33m
  8. https://techcrunch.com/2016/10/17/science-and-technology-will-make-mental-and-emotional-wellbeing-scalable-accessible-and-cheap/

IntelliTunes – Team Awesome

IntelliTunes – Predictive Analytics for Music Composition

 

The Problem

The music industry is notoriously fickle; despite attempts to predict the next chart-topper, a significant amount of resources is devoted to training and nurturing a group of artists in the hope that just a small percentage of them succeed. While this may seem like the challenge, the real challenge lies behind the scenes with the songwriters and musicians who come up with the tunes we hear on the radio.

 

We believe that the music industry has seen a consistent genre shift every few years: from 70's disco to 80's ballads, 90's rock, 2000's hip-hop and today's Bieber. Given that we know a trend will last for a period of a few years, it would be worthwhile to invest in a system able to predict these trends and reduce the inefficiencies of songwriting. Based on the type of music currently topping the charts, and on historical chart-toppers, we're proposing an AI system that could compose a range of songs likely to constitute future chart hits. Bieber would just be a mouthpiece for a significantly more intelligent machine.

 

Existing Platforms

There have been a few iterations of the proposed solution so far, but most of them have dealt with a library of past and present music, while IntelliTunes aims to be a predictive model of tomorrow’s music.

The Sony Computer Science Laboratory ("Sony CSL") was probably the first commercial endeavour to integrate AI and music composition. By analysing the musicality, tone, pitch and harmony of a range of music trending on the top charts, the program was able to consolidate these elements and create a unique pop song. It had all the markers of other top-ranking songs of the period, and hence should also have been a hit. However, much like Chef Watson, the result was something akin to serving caviar with peanut butter.

Platforms such as Pandora and Spotify also serve a need in the market by predicting your future listening trends from your past music choices. While an astute use of AI, these platforms only match your future listening needs with music that's already on the market. They do not attempt to create songs personalised to the individual listener.

 

The Proposed Solution

Much like how Chef Watson was the proposed AI solution for the culinary world, we expect IntelliTunes to be the solution for the music industry. By design, IntelliTunes would constantly track the movements of songs on the charts, identifying parameters such as time on the charts, sudden climbs, sudden drops and, most importantly, region.

 

The program would utilise deep learning, a particular type of machine learning in which multiple layers of "neural networks" process information between various input and output points – a loose imitation of the human brain's neural structure. This allows the AI platform to understand and model high-level abstractions in data, such as the patterns in a melody or the features in a person's face.

The system would then host a virtual library of micro-attributes of musicality and compose songs expected to be desirable in the near future, given the movements and trends of today. An additional interesting application would be the ability to compose unique songs for a particular time period or genre, for the listener who would like their own personal rendition of Metallica meets 40's swing. Apps like Spotify have shown that giving users the ability to independently curate their music is a valuable proposition and creates especially sticky customers.
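As a drastically simplified stand-in for the deep learning approach described above, a first-order Markov chain over notes illustrates the core idea: learn transition statistics from a corpus of melodies, then compose new sequences from them. This is a sketch, not the proposed architecture:

```python
import random
from collections import defaultdict

def train_transitions(melodies):
    """Count note-to-note transitions across a corpus of melodies
    (each melody is a list of note names)."""
    transitions = defaultdict(list)
    for melody in melodies:
        for a, b in zip(melody, melody[1:]):
            transitions[a].append(b)
    return transitions

def compose(transitions, start, length, seed=None):
    """Sample a new melody by walking the learned transitions."""
    rng = random.Random(seed)
    melody = [start]
    for _ in range(length - 1):
        options = transitions.get(melody[-1])
        if not options:  # dead end: no observed continuation
            break
        melody.append(rng.choice(options))
    return melody
```

A deep model would replace the single-step transition table with learned long-range structure (phrases, key, rhythm), but the train-then-sample loop is the same.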

 

One of the great things about the proposed solution is that it would be language agnostic. IntelliTunes would be able to make predictions and compositions across multiple geographies, because all it does is put together the notes that form a melody. Given the melody, a human songwriter can piece in words with the requisite amount of human emotion that an AI could not replicate. An AI may have been able to come up with the tune of Nicki Minaj's "Anaconda", but it's highly unlikely that it could have fathomed the lyrics.

 


Team Members:

Joseph Gnanapragasm, Cen Qian, Allison Weil &  Rachel Chamberlain

Chord – cut the circle!

The Why:

Ever been wandering around a huge room full of people you've never met, wondering whom to talk to next and how to avoid the crop circles? That ambiguity is a big reason for networking anxiety.

“I hate networking” is something that executives and MBA students alike say often. Networking makes people feel uncomfortable, even dirty, as the process is seen as exploitative and inauthentic.

But studies show that networking is an essential tool for success in today's world. Professionals who network effectively, both internally and externally, are more successful than colleagues who do not. With that said, it is not surprising that there are so many trainings, seminars, and books on how to network better, and on how to change people's attitudes towards networking.

Networking is especially needed in fields where people often have no big names on their resumes and need to prove their credibility through personal connections and recommendations – startups, for example.

The What:

We believe there is a way to make networking more pleasant and more effective. Chord is a startup that unites the latest developments in machine learning and crowdsourcing. Through our finely tuned algorithms and human touch, we create connections that will make networking something anyone can enjoy.

The How:

Users first create a profile, where we learn about their goals and their background – not only professional, but also their interests. This is achieved either by taking a quiz or by connecting the user's social network accounts and supplementing those with additional questions.

In the second step, the user identifies the event they want to attend. This can be done manually, but we will also offer browser extensions that detect an event when the user signs up, by analyzing event pages or calendar entries.

In the third step, the machine algorithm analyzes the event and its participants and prepares a short-list of candidates to network with. That list is stripped of all identifying data and transferred to our human crowdworkers, who look at the data and use common sense to identify the best opportunities. Their input is fed back to the algorithm, which combines multiple sources to generate a final list, as well as potential topics for discussion.
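The shortlisting step could begin with something as simple as an interest-overlap score plus a goal-match bonus, before the crowdsourced refinement. A minimal sketch under assumed field names (`interests`, `seeking`, `offers` are illustrative, not our actual schema):

```python
def match_score(user, candidate):
    """Toy first-pass score: Jaccard overlap of interests, plus a
    bonus when the candidate offers what the user is seeking."""
    a, b = set(user["interests"]), set(candidate["interests"])
    jaccard = len(a & b) / len(a | b) if a | b else 0.0
    goal_bonus = 0.5 if user["seeking"] in candidate["offers"] else 0.0
    return jaccard + goal_bonus

def shortlist(user, attendees, k=3):
    """Rank all attendees for one user and keep the top k."""
    return sorted(attendees, key=lambda c: match_score(user, c), reverse=True)[:k]
```

The crowdworkers then act as a human re-ranking layer on top of exactly this kind of candidate list.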

The $$$

The startup has a few options to generate revenue. First, we can charge participants a small fee per event, depending on its size and complexity. Second, we can charge event organizers: once a large network effect exists, hosts will pay for our services to attract more attendees, since people are more willing to go when they know their networking will be fruitful.

The ???

Chord offers quite a few variations that might lead to new business models and ways to create value. For example, we could CREATE networking events based on mutually beneficial connections; those events would be designed in a way that allows for deeper connections.

It can become a new social network, or be used alongside LinkedIn or similar professional networks to help people who would benefit from connecting. It will be opt-in.

Chord could give events scores based on how ‘fruitful’ the connections are and predict the quality of the event.

It could also be installed on a device that people carry with them to analyze movement patterns, which could later be used to improve the layouts of different events, as well as for scientific research.

The BIG ONE:

We require $450,000 to launch the web app and promote it. $400,000 will be spent on salaries and the website, and the remaining $50,000 on marketing and awareness generation.

The Team:

  • Alexander Aksakov
  • Roman Cherepakha
  • Nargiz Sadigzade
  • Yegor Samusenko
  • Manuk Shirinyan

Researched Links:

https://hbr.org/2016/05/learn-to-love-networking

http://static1.squarespace.com/static/55dcde36e4b0df55a96ab220/t/55e86ab6e4b01fadae0024b5/1441295030249/Casciaro+Gino+Kouchaki+ASQ+2014.pdf

https://arxiv.org/pdf/1610.01790.pdf

http://tim.blog/2015/08/26/how-to-build-a-world-class-network-in-record-time/

Women Communicate Better – Yelp

The Company

Yelp! is a platform for user-published reviews of local businesses. Yelp!'s users submit feedback on local businesses with two types of information: a 5-star rating and textual reviews.

 

The Profile

Yelp! is an active user of prize competitions to better understand and use its human-inputted data and to crowdsource novel approaches that improve its service for users. The company is currently promoting the 9th iteration of its dataset challenge, with a total of ten prizes amounting to a modest $5,000.

The data subset includes 11 cities spanning 4 countries (Germany, UK, US, Canada), which means users have access to over 4M reviews of ~150K businesses for analysis. For the ninth iteration, Yelp! is also including 200K user-uploaded photos in the dataset.

Unlike the Netflix Prize, Yelp! leaves the challenge question and the success metric open-ended, letting applicants explore what interests them in the dataset; submissions are judged on technical rigor, relevance of results, and novelty.

 

Previous Winners:

One interesting proposal, from a winning team at the University of California, Berkeley in the first dataset challenge, used Natural Language Processing (NLP) to extract subtopics from individuals' text reviews. The team used unsupervised machine learning to discover the categories of subtopics that diners mentioned in text reviews, including areas of interest such as restaurant service, decor, and food quality. With the subcategories developed by the algorithm, the team could then predict, for a given review, what each subtopic's rating would be.

This form of machine learning and natural language processing is helpful to (1) evaluate the accuracy of a user's star rating and (2) help small business owners improve their service. The subtopic approach avoids overweighting any single aspect of a user's experience in the overall score.
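As a toy illustration of subtopic extraction: the Berkeley team learned its subtopics unsupervised from the review corpus, whereas this sketch simply hard-codes small lexicons, which is enough to show how per-subtopic signals can be pulled from one review:

```python
# Illustrative subtopic lexicons; a real system would learn these
# from the data rather than hand-write them.
SUBTOPICS = {
    "service": {"waiter", "service", "staff", "friendly", "slow"},
    "decor": {"decor", "ambiance", "cozy", "lighting"},
    "food": {"delicious", "bland", "fresh", "menu", "tasty"},
}

def subtopic_mentions(review):
    """Count how many words from each subtopic lexicon appear in a review."""
    words = set(review.lower().split())
    return {topic: len(words & vocab) for topic, vocab in SUBTOPICS.items()}
```

Mention counts like these (or, in the real model, topic probabilities) become the per-subtopic features from which a subtopic rating can then be predicted.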

 

Potential for Improvement

  1. Yelp! could improve the value of this subtopic analysis with human input.  By enabling users to assign tags (“ambiance”, “decor”) to a review or, even better, a 5-star score for each subtopic (as TripAdvisor currently does), Yelp! could enrich its dataset for users and small businesses.  NLP could suggest tags to users from the textual analysis of the review (as Stack Overflow does), or users could input their own.
  2. One could also see subtopics moving in a different direction. Currently, Yelp! reviews are conglomerated for one business that provides multiple services (e.g., a hotel with a spa and a restaurant).  Similarly, Yelp! reviews for a restaurant that serves both brunch and dinner are combined into a single rating.  This prevents users from understanding the value the business provides for one particular service.  By using NLP and human input on subtopics (e.g. tagging “spa”, “facial”, “brunch”), the user could have a more granular view of the quality of a business offering for what the user is trying to achieve, and could assess the business based on the service most relevant to their needs, rather than as an undifferentiated whole.

https://engineeringblog.yelp.com/2017/01/dataset-round-7-winners-and-announcing-round-9.html

https://www.yelp.com/dataset_challenge

https://www.yelp.com/html/pdf/YelpDatasetChallengeWinner_ImprovingRestaurants.pdf

https://www.ischool.berkeley.edu/news/2013/students-data-analysis-uncovers-hidden-trends-yelp-reviews

Arity – Using Data Analytics to Make Roads Safer

Opportunity

Until recently, the automotive insurance industry used archaic methods to assess driver risk. Premiums were based on factors such as geography, the driver's age, whether the driver had been in an accident before, and the type of car they drive. However, these factors are not good indicators of risk and depend heavily on an event, such as an accident, having already taken place. Any accident leads to a huge cost and payout for an insurance company. The question the industry started to ask was: is there a way to predict risk before the fact and prevent accidents from happening at all?

This led to the advent of Usage Based Insurance. The premise is simple: use sensors to detect driving patterns that indicate risky behavior before a costly event takes place. This helped insurance providers identify risky drivers and price more accurately based on drivers' risk profiles. The same technology can be used to provide feedback to drivers, helping them improve their driving habits and, in effect, reducing the risk of an accident.

Solution

 

Allstate OBDII Device

Arity recently spun out of Allstate, bringing machine learning and predictive analytics to better predict risk and help drivers understand their driving behavior. The solution initially relied on an OBDII-based dongle which, once inserted into a car's diagnostic port, would capture driving information from the vehicle (speed, diagnostics, accelerometer, GPS). Using proprietary models and machine learning, the data from the OBDII device is used to give each driver a risk score. This score reflects the driver's likelihood of getting into an accident and, ultimately, their viability as a customer. In most cases, the insurance company elects not to insure a high-risk driver, and most good drivers actually see their premiums decrease.

While the dongle-based solution worked well, it was not cost effective, nor could it gather information about the driver's own behavior. The next stage of this technology uses the driver's mobile device as the sensor, collecting information such as:

  • travel speed,
  • acceleration,
  • deceleration,
  • cornering speed,
  • time of day,
  • phone usage (was the phone connected over Bluetooth, or physically in the driver's hand?)

All of this information is processed through proprietary models and compared against risk data. Arity has access to 21 billion miles of driving data and over 85 years of Allstate's insurance underwriting data, which serve as the baseline for creating accurate risk models. This allows Arity to build a driver risk profile and also provide feedback to the driver directly through the app on their mobile device.
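Since Arity's models are proprietary, we can only sketch the idea. The toy Python snippet below shows how per-trip sensor readings like the ones listed above might be normalized per mile and combined into a 0-100 risk score; the feature weights and function names here are our own assumptions, not Arity's.

```python
# Illustrative sketch only: a toy per-trip risk score on a 0-100 scale,
# where higher means riskier. Weights are invented for illustration.

def trip_risk_score(hard_brakes, hard_accels, phone_handling_secs,
                    night_miles, total_miles):
    """Score a single trip; event counts are normalized per mile driven."""
    if total_miles <= 0:
        raise ValueError("total_miles must be positive")
    brake_rate = hard_brakes / total_miles
    accel_rate = hard_accels / total_miles
    phone_rate = phone_handling_secs / total_miles
    night_frac = night_miles / total_miles
    # Weighted sum of risk signals, capped at 100.
    raw = 40 * brake_rate + 30 * accel_rate + 0.5 * phone_rate + 20 * night_frac
    return min(100.0, raw)

def driver_risk_profile(trips):
    """Combine trip scores, weighted by miles, into one driver-level score."""
    total_miles = sum(t["total_miles"] for t in trips)
    weighted = sum(trip_risk_score(**t) * t["total_miles"] for t in trips)
    return weighted / total_miles
```

A real system would learn these weights from the 21 billion miles of labeled outcome data rather than hand-pick them; the sketch only shows the shape of the computation, from raw trip events to a single comparable score.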

Effectiveness and Commercial Promise

According to the National Safety Council, cell phone use while driving leads to 1.6 million crashes each year, and 1 out of every 4 car accidents in the United States is caused by texting and driving. The estimated economic and comprehensive costs of phone usage while driving are $61.5 billion and $209 billion, respectively. Arity's solution monitors a driver's interaction with their mobile device along with their driving behavior, and can deter or notify consumers when they are distracted.

Insurance companies can leverage this technology to identify risky drivers and help them improve their driving habits. For example, Allstate's Drivewise application allows policyholders to save up to 3% of their insurance cost by using the app to manage their insurance. After the first 50 trips, policyholders may earn up to 15% cash back based on their driving behavior and risk profile. Arity's solution is particularly important to the highest-risk group, teenage and student drivers, and can help them be safer on the road.

Ride-sharing and commercial fleet customers (taxi, bus, Uber, and Lyft) are also looking for ways to better track the performance of their employees and drivers. These companies would be able to manage risk better through the platform: they can qualify drivers based on driving behavior and retain the safe ones. High-risk drivers also degrade the riding experience and ultimately hurt these companies' brands. By using Arity's platform, the companies can weed out high-risk drivers and thereby lower their auto insurance costs.

Alterations

Today the algorithms do not take external factors into account. Our recommendations:

  • incorporate weather information,
  • incorporate location data, such as proximity to known bars or other high-risk areas,
  • improve the quality of data collected from mobile phones, as it is currently quite noisy,
  • make use of wearables as additional sensors that can provide insight into driver health,
  • lastly, partner with OEMs to get access to connected-car data, which is far more reliable than mobile-device data for tracking vehicle movement.

Competitors

Given the multiple potential applications, there is space in the industry for many companies to operate successfully. Arity is positioned to outperform its competitors today due to the amount of data it can collect through Allstate's large and diverse customer base. Arity is also making its platform available to smaller insurance companies and fleet operators, which will enable it to collect even larger amounts of data and, in turn, improve its risk models.

Octo America

  • Primarily partners with government
  • A global firm without strong data and market penetration in US

Cambridge Telematics

  • Primarily works with State Farm insurance
  • The data used for modeling is not as diverse as Arity's

Zendrive

  • Primarily targets the B2C segment but monetizes through B2B
  • No current partnership with an insurance company

 

Research Links

https://developer.arity.com/driving-engine-sdk

https://www.zendrive.com/how-it-works/

http://www.naic.org/cipr_topics/topic_usage_based_insurance.htm

https://www.arity.com/index.html

https://www.octousa.com

https://www.cmtelematics.com/

https://crashstats.nhtsa.dot.gov/Api/Public/ViewPublication/812348

https://www.edgarsnyder.com/car-accident/cause-of-accident/cell-phone/cell-phone-statistics.html



Team – March and the Machines

Ewelina Thompson, Akkaravuth Kopsombut, Andrew Kerosky, Ashwin Avasarala, Dhruv Chadha, Keenan Johnston

SDR.ai – CJEMS Pitch

The Problem / Opportunity

Sales Development Representatives (SDRs) help companies find and qualify sales leads to generate a sales pipeline for the Account Executives, who then spend their time working with the customer to close a deal. The SDRs are a vital part of the sales process, as they need to weed out people that will not buy to find the ones that likely will, but their work is often repetitive and cyclical. SDRs work with large data sets and follow a clearly defined process, making them ideal candidates to integrate aspects of their jobs with automation. While it is still on the human SDR to understand the pain points of the prospective customer, an opportunity exists to better personalize messaging and make use of the available data to increase the final close rates for sales teams.

Current SDR emails already utilize templates, but these templates do not take into account what works and what doesn't. And while it is possible to analyze the open and click rates of emails, linking those rates to revenue, or spending time tweaking emails to add extra personalization, detracts from the time SDRs could spend on the phone with customers.

The Solution

SDR.ai aims to solve this problem by creating emails that mimic how actual SDRs sound, without the template, while taking into account the available data on what works and what doesn't. It will integrate with existing popular CRMs, like Salesforce, to learn from previous email exchanges and aggregate data in one place. Messages can be personalized to the recipient to create a more authentic exchange. Additionally, and most importantly, SDR.ai can send far more messages, increasing the volume of potential leads and the chances of bringing in additional revenue.

After an initial period of training and manual emails, SDR.ai will continue to build smarter responses, with the goal of handling everything except phone calls, including scheduling and even finding the right person at a prospective company for SDRs to email (by using integrations like LinkedIn and Rapportive). Unlike real employees, SDR.ai is online 24/7, making it easier to connect with clients abroad, who normally have to take time differences into account, losing valuable time and creating even longer sales cycles.

Pilot & Prototype

To ensure that we are creating a product customers actually want to use, we plan to pilot SDR.ai after an MVP is created in order to gauge early feedback. On average, compensation for an SDR is high, with a base salary of $46k and on-target earnings (OTE) of around $72k. We can convince companies to be part of our pilot program by showing how we can ultimately either reduce the number of SDRs they need or bring in additional revenue per SDR.

Data Collection

To ensure we collect enough data to make the prototype of the product useful and accurate, we plan to partner with software (SaaS) companies that handle a large volume of leads. Given that this product can be tied directly to revenue generation, companies will likely be willing to try the prototype. From here, we could collect data on the most common language used, tied to deals that have been closed historically. By integrating with popular CRMs like Salesforce that already store historical data and emails used, we can determine how many emails on average it takes before deals are progressed from the SDR to the Account Executive. We also can take things a step further by looking at what is useful across different industry verticals, as CRMs already store this type of information.
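One of the metrics described above, the average number of emails a deal takes before it progresses from the SDR to the Account Executive, is simple to compute once the CRM records are pulled. The record layout below is a hypothetical simplification, not a real Salesforce schema:

```python
# Hypothetical sketch: average emails sent before a deal progresses.
# Each deal record carries an email count and whether it progressed.

def avg_emails_to_progression(deals):
    """deals: list of dicts with 'emails_sent' (int) and 'progressed' (bool)."""
    progressed = [d["emails_sent"] for d in deals if d["progressed"]]
    if not progressed:
        return None  # no progressed deals yet, so no signal
    return sum(progressed) / len(progressed)
```

Segmenting the same computation by industry vertical (which CRMs already store) would give the per-vertical view mentioned above.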

Validation

After the pilot runs its course for a month or so (or for the length of the average sales cycle), we can compare the emails that were created with SDR.ai against those that were not. In short, we can validate that SDR.ai emails were (a) responded to more readily, either because the right person in the organization was picked (i.e., fewer emails passing SDRs from one employee at a prospect to another) or because of shortened response times, and/or (b) opened and responded to, by analyzing the language used in each response. The language can then be continually refined based on (b) until SDR.ai finds the right balance of length, follow-up, and personalization.
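A minimal sketch of the comparison in step (a), assuming we tag each sent email with whether it was SDR.ai-assisted and whether it received a reply (the field names are illustrative assumptions):

```python
# Compare response rates of SDR.ai-assisted emails vs. a manual control group.

def response_rate(emails):
    """Fraction of sent emails that received a reply."""
    if not emails:
        return 0.0
    replied = sum(1 for e in emails if e["replied"])
    return replied / len(emails)

def lift(pilot_emails, control_emails):
    """Relative improvement of the pilot group over the control group."""
    control = response_rate(control_emails)
    if control == 0:
        return None  # control baseline too small to compare against
    return (response_rate(pilot_emails) - control) / control
```

A positive lift over a full sales cycle would be the headline number shown to pilot customers; a real analysis would also check that the difference is statistically significant given the sample sizes.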

Team Members

Marjorie Chelius

Cristina Costa

Emma Nagel

Sean Neil

Jay Sathe

Sources

http://blog.persistiq.com/the-rise-of-sales-development?

https://www.salesforce.com/blog/2014/08/what-is-sales-development-gp.html

https://www.saleshacker.com/day-life-sales-development-rep/

https://resources.datanyze.com/blog/preparing-for-future-without-sdrs

 

Pokemon Go: Augmented Reality

The problem:

Virtual reality, for all its advocates, has one clearly apparent shortcoming: the absence of integration with daily human behavior. A virtual world is separate from the one we physically live in. Augmented reality, on the other hand, uses technology to connect us with real-world experiences. A prime example is Pokemon Go, which became the first successful AR-based gaming app when it launched in 2016. In this post, we will examine Pokemon Go as a success case for seamlessly integrating AR into our entertainment.

Solution:

Pokemon Go is a game in which characters are overlaid on the real built environment throughout the world, featured in areas of note or importance. The primary purpose of the game is to collect a roster of different Pokemon monsters, which are located in different places. Along the way you have to recharge, and hunt for monsters that vary in rarity, availability, and location.

The game overlays a virtual interface onto the real-world map on your cellphone, and can sense your habits to project different goals and objectives depending on the user.

Effectiveness

The game is highly dependent on user network effects to determine which characters appear in different locations. It does this by running algorithms on desired or “rare” characters and then featuring them for brief moments in uncommon locations. Users then communicate with each other through the platform to share news of each rare sighting and eventually congregate in the same area.

Using AR, the game connects users to a virtual world while incentivising activity that relates back to the real one. A clear example of this is businesses that pay a fee to be listed as a “pokestop”, where players can recharge their lives and, in doing so, patronise the store. This has taken marketing to a whole different level and created a separate platform for businesses to target the demographic of users that play the game.

The game has also been able to overlay the real world with a virtual one without the use of any special hardware (like a VR headset or console). In doing so, an artificial environment is created on the user's mobile phone, layered on top of the stimulus the user already receives from the real-world image presented. This makes this sort of AR platform extremely effective at engaging users through a myriad of outreach mechanisms.

The Future of AR

Pokemon Go is an example of how effective blending the physical and virtual worlds can be for a user experience. This was the first time AR was brought to a mass audience. The rapid adoption and wild success of the game show that AR is very much a technology that's here to stay, and one that can be implemented on existing platforms without too much effort. This opens the door to multiple future uses, beyond chasing a yellow cartoon character around town.

References

https://www.nytimes.com/2016/07/12/technology/pokemon-go-brings-augmented-reality-to-a-mass-audience.html?_r=0

https://www.theguardian.com/technology/2016/oct/23/augmented-reality-development-future-smartphone

https://www.recode.net/2016/9/19/12965508/pokemon-go-john-hanke-augmented-virtual-reality-ar-recode-decode-podcast

http://kwhs.wharton.upenn.edu/2016/08/pokemon-go-technology-behind-merging-digital-physical-world/

Team Awesome

Cash Class

The Problem

American education is an expensive endeavor. It is one of the highest categories of government spending, but the system is still struggling to yield strong academic outcomes for all children across socioeconomic class and race. In 2002, federal discretionary funding for education was $49 billion; in 2016 it was $68 billion. That is a roughly 40% increase in education funding, yet there has not been a 40% increase in educational outcomes. Clearly, spending more money does not by itself yield better educational outcomes in the US. This can be seen most dramatically in the case of Mark Zuckerberg's $100 million donation to Newark Public Schools, intended to transform them in five years; five years later all the money was spent and Newark schools were still an absolute mess. So where is all the money going?

There is little data analysis conducted on where the government directs funds, how schools spend money, and how this spending does or doesn't correlate with student performance. Current evaluation of school funding is either very high level, based on grants or per-pupil spending, or very granular, based on school budgets and audits. There needs to be effective analysis and tracking of the middle layer of spending in public schools, from the local level to the federal level.

 

The Solution

To foster effective spending in public K-12 education that yields academic results for our kids, we bring you Cash Class. At the school, district, state, and even federal levels, Cash Class will track and analyze spending to discover causal relationships between financial allocation and student achievement, learning which spending patterns tend to be successful. Cash Class will analyze major data sets and take in new data from participants to build personalized recommendations for how budgets should be allocated, given the funding available and the goals of the educational entity. The services of Cash Class would be valuable both to a single principal looking to better allocate their budget and to a federal grant office deciding where competitive funding is most needed to actually improve academic outcomes.

There will be two levels of membership in Cash Class. A basic membership will provide access to insightful spending trends at the national and state level, compared against standardized test performance. A premium membership will provide specialized contracts to pursue the specific academic and financial goals of the educational entity.

 

The Design

So how does Cash Class work? Our product combines internal and external sensors with machine and human algorithms to examine historical financial and academic data, in an effort to find causal relationships between specific budget allocations and academic mobility. As the machine algorithms find and learn patterns, humans can add specific context about the locality of a certain school or the directives associated with certain government funding. With this level of analysis, the machine can identify key spending patterns in coordination with academic performance and make strategic recommendations for future spending. The more data the school or government entity can provide, the better the machine algorithms will learn and the better the recommendations will become.

In order to achieve this, we would use machine learning to flag spending and student achievement trends across all publicly available data (and private data where available), and use human feedback to confirm the plausibility of each correlation. This machine and human interpretation would be synthesized to generate major budget recommendations, along with everyday spending guidance based on the client and their achievement goals. For example, Cash Class could recommend that a school allocate $750K to staffing, or flag that spending money on an online reading program may help elementary boys of low socioeconomic status accelerate their reading growth. As time progresses, more spending and academic data will be added to the system, helping the machines and humans improve their budget recommendations and make better financial and educational choices for the future of America's children.
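The core flagging step above can be sketched very simply. The toy Python snippet below computes, across a handful of schools, which spending category's per-pupil allocation most strongly correlates with test scores; the data and category names are invented, and a correlation like this only suggests a relationship — the human review step described above is what guards against spurious flags.

```python
# Toy sketch: flag the spending category most correlated with achievement.

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def flag_strongest_category(spending_by_category, scores):
    """Return the category whose spending tracks scores most strongly.

    spending_by_category: dict of category -> per-school spending list
    scores: per-school achievement scores, in the same school order
    """
    return max(spending_by_category,
               key=lambda c: abs(pearson(spending_by_category[c], scores)))
```

A production system would use far richer models and control for confounders like district wealth, but the pipeline shape is the same: rank candidate spending patterns by their relationship to achievement, then hand the top candidates to humans for context.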

 

The Competitive Advantage

There are currently accounting systems that help schools manage everyday cash flow and end of year financial accounting, but nothing that analyzes historical data to help build predictive budget models. Cash Class would be an entirely new level of money management for American education.

Group: Women Communicate Better Than Men

Ngozika Uzoma, David Cramer, Kellie Braam, Chantelle Pires, Emily Shaw

Sources 

http://www.businessinsider.com/mark-zuckerbergs-failed-100-million-donation-to-newark-public-schools-2015-9

https://www.theatlantic.com/education/archive/2015/01/where-school-dollars-go-to-waste/384949/

https://www.washingtonpost.com/opinions/how-newark-schools-partially-squandered-a-great-prize/2015/10/20/ffff660c-7743-11e5-a958-d889faf561dc_story.html?utm_term=.952d649e0801

https://www.usnews.com/news/blogs/data-mine/2016/01/14/federal-education-funding-where-does-the-money-go

http://www.npr.org/sections/ed/2016/04/25/468157856/can-more-money-fix-americas-schools

https://www.gatesnotes.com/Books/Where-Do-School-Funds-Go-Book-Review