Using “Risk-assessment” algorithms to help determine sentencing

 

Opportunity:

 

The Pennsylvania judicial system is one of many state systems that have been considering statistically driven tools to help determine how much prison time individuals found guilty of crimes should serve. The state spends $2 billion a year on its corrections system — more than 7 percent of the total state budget, up from less than 2 percent 30 years ago. Further, recidivism rates (the tendency of convicted criminals to re-offend) remain high: 1 in 3 inmates is arrested again or re-incarcerated within a year of release. By properly identifying and distinguishing high-, medium-, and low-risk offenders, the system has the opportunity to calibrate its sentencing and optimize the operations of its correctional facilities. In theory, risk assessment tools could lead to both less incarceration and less crime.

 

Solution:

 

The available risk assessment tools assign points to variables (such as age, gender, income, drug use, and previous convictions) that have been shown in historical data to be strong indicators of criminal behavior. Social scientists developed this understanding by following former prisoners for years, examining the facts of their lives to gauge their propensity for repeat criminal activity. Many court systems already use such tools to guide decisions about which prisoners to release on parole, for example, and risk assessments are becoming increasingly popular as a way to help set bail for inmates awaiting trial. Better data-driven sentencing judgments would ultimately help these systems reduce costs.
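
To make the mechanics concrete, here is a minimal sketch of how such a point-based instrument might work; the variables, weights, and cutoffs are invented for illustration and do not reflect any actual tool in use:

```python
# Illustrative point-based risk score; weights and cutoffs are hypothetical,
# not those of any real instrument.

def risk_score(offender):
    """Sum points for factors historically correlated with recidivism."""
    points = 0
    points += 3 if offender["age"] < 25 else 0      # youth
    points += 2 * offender["prior_convictions"]     # criminal history
    points += 2 if offender["drug_use"] else 0      # substance abuse
    points += 1 if offender["unemployed"] else 0    # employment status
    return points

def risk_band(score, low_cutoff=3, high_cutoff=7):
    """Map a raw score to the low / medium / high bands used in sentencing."""
    if score <= low_cutoff:
        return "low"
    return "medium" if score < high_cutoff else "high"

example = {"age": 22, "prior_convictions": 2, "drug_use": False, "unemployed": True}
print(risk_band(risk_score(example)))  # -> "high" (score = 8)
```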

 

Commercial promise and challenges:

 

 

The main value proposition is that adding an algorithm-based component to the judicial decision-making process helps many stakeholders across the value chain:

 

  • Reduced risk of individual bias affecting judgments
  • Increased efficiency, reducing trial and bail time, which benefits both judges and defendants
  • Reduced costs, enabling better allocation of taxpayer money

 

While humans inherently rely on biased personal experience to guide their judgments, empirically grounded questions of predictive risk analysis play to the strengths of machine learning, automated reasoning, and other forms of AI. One machine-learning policy simulation concluded that such programs could be used to cut crime by up to 24.8 percent with no change in jailing rates, or to reduce jail populations by up to 42 percent with no increase in crime rates. Importantly, these gains can be made across the board, including for underrepresented groups like Hispanics and African-Americans.

 

On the other hand, this approach faces challenges at the individual level, because the sentence it influences rests on probabilities derived from similar offenders in the past, even though the offender's own future could differ: he or she may be an outlier to the statistics. To minimize these errors, we need to know whether the system will have enough variables to assess the individual as accurately as possible, and whether these tools will supplement the judge's decision rather than substitute for it.

 

Competition:

Even though a sizable share of the agencies and organizations using AI systems in criminal justice reform are governmental bodies, the algorithms and software they use are privately owned. Given the nascent state of this industry, competition may arise among private companies trying to develop more efficient and fair algorithms, or from an altogether different process, such as a community-based open-source AI project. A report from the Brookings Institution cites the success of open-source programs such as Google's TensorFlow and Microsoft's DMTK as evidence that the latter approach can work.

 

Proposed alteration:

 

Any risk-assessment tools adopted should integrate easily with the software and processes already in use in the justice system. While companies will want to use the 'black box' model that keeps their algorithms confidential, that opacity can invite legal challenges, as in the Loomis v. Wisconsin case (Wired). We would therefore emphasize an open-source solution, with data security as a priority.

 

Another difficult question in building the model is how to tease out factors that are strong predictors without regressing to biases based on race and socioeconomic status (SES) that are prevalent in the current judicial system and are socially deemed unfair.

 

Lastly, these tools could be enhanced by factoring an inmate's behavior in jail into any subsequent assessment, mitigating mistakes when they happen. If the data show that inmates who exhibit good behavior during their sentence are unlikely to re-offend, incorporating that signal would make the system more efficient and as fair as possible, while still meeting the goal of reducing corrections costs.

Sources:

 

https://www.themarshallproject.org/2015/08/04/the-new-science-of-sentencing#.wewudvPdi

 

https://www.brookings.edu/blog/techtank/2017/07/20/its-time-for-our-justice-system-to-embrace-artificial-intelligence/

 

https://www.wired.com/2017/04/courts-using-ai-sentence-criminals-must-stop-now/

 

https://www.thoughtworks.com/insights/blog/how-artificial-intelligence-transforming-criminal-justice-system

 

https://www.cs.cornell.edu/home/kleinber/w23180.pdf

 

https://www.uaa.alaska.edu/academics/college-of-health/departments/justice-center/alaska-justice-forum/34/3winter2018/a.pretrial-risk-assessment.cshtml

 

Team Members:

Mohammed Alrabiah

Tuneer De

Mikhail Uvarov

Colin Ambler

Lindsay Hanson

Quartet Health

Opportunity

 

About 50% of the US adult population suffers from a physical condition, and roughly a third of those people also have a mental illness. Yet for people who suffer from both, it is often only the physical condition that gets treated. Studies have shown that people with a physical health issue (heart failure, for example) and an untreated or under-treated behavioral health issue (such as depression or anxiety) cost 2-3x more to treat for their physical conditions. Per 2012 data, those patients accounted for almost $300B annually in excess health care spending, mostly attributable to use of medical (as opposed to behavioral) services. Since this excess spend is the total cost savings a potential solution would offer patients and insurance companies, we believe it is a reasonable estimate of the market size.

 

Solution

 

Quartet Health addresses this problem by using data-driven predictive analytics and recommendations to connect untreated or under-treated patients with physical ailments to the right mental health providers, leading to a comprehensive treatment plan. Quartet accomplishes this by tapping into big data to identify people within a primary care system with undiagnosed or untreated mental health conditions. It partners with insurance providers to analyze millions of insurance claims and flags patients with comorbidities or who have not been treated for behavioral health issues; it can combine results from behavioral health screenings with a patient's data to reveal who might benefit from behavioral healthcare. It also matches those patients with local behavioral health providers who accept their insurance and can meet both face to face and via telemedicine. It then notifies the primary care doctor, follows up to make sure patients keep their appointments, checks clinical results, and calculates the cost of care.
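
As a rough illustration of the claims-level flagging described above, here is a minimal sketch; the condition labels, rule, and data layout are our own assumptions, not Quartet's actual (proprietary) model:

```python
# Hypothetical screening rule: flag members whose claims show a chronic
# physical condition but no behavioral-health treatment. Labels are invented.

PHYSICAL = {"heart_failure", "diabetes", "copd"}
BEHAVIORAL = {"psychotherapy", "antidepressant_rx", "psychiatry_visit"}

def flag_untreated(claims_by_member):
    """Return member IDs with a physical comorbidity and no behavioral claims."""
    flagged = []
    for member, claims in claims_by_member.items():
        has_physical = any(c in PHYSICAL for c in claims)
        has_behavioral = any(c in BEHAVIORAL for c in claims)
        if has_physical and not has_behavioral:
            flagged.append(member)
    return flagged

claims = {
    "m1": ["heart_failure", "er_visit"],   # physical condition only -> flag
    "m2": ["diabetes", "psychotherapy"],   # already receiving behavioral care
}
print(flag_untreated(claims))  # -> ['m1']
```

A production system would use real diagnosis and procedure codes and a learned model rather than a hand-written rule, but the flag-then-match workflow is the same.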

Effectiveness

 

Since hospital systems and insurers pay for Quartet's platform, it is free for patients, primary-care doctors, and behavioral health specialists, removing cost as a barrier to use. Through partnerships with insurers and healthcare systems, health plans pay per member, and compensation is tied to quality of care and cost reduction. The shift from fee-for-service to value-based payment rewards providers for patient outcomes, compelling them to provide superior holistic treatment.

 

This also positively impacts patients' lives because their treatment becomes more comprehensive, effectively targeting both the physical and the mental conditions and enabling them to live more comfortably, without return hospital visits and unnecessary, expensive bills. The tool allows doctors to proactively diagnose mental health issues before they become exacerbated, leading to (a) effective treatment of the mental condition and (b) positive side effects that assist recovery from the underlying physical issue.

 

Competitive Landscape

 

Although the use of augmented judgment in diagnosing mental illnesses has a lot of potential, it is still in its early stages. For instance, NeuroLex is pioneering a computer model that predicts the onset of diseases such as psychosis and schizophrenia by collecting and analyzing patients' speech samples for patterns (e.g., pauses between words, use of determiners) that can be indicative of each disease. Other companies in this space include New York-based AbleTo, which announced a $36.6 million raise for its behavioral health platform; U.K.-based Ieso Digital Health, which raised $24 million for a platform offering psychological therapies and cognitive behavioral therapy; and Talkspace, which has raised $59 million to offer on-demand virtual therapy via instant messaging. Related are meditation platforms such as Headspace, which recently closed a $36.7 million round, and Y Combinator alum Simple Habit, which raised a small $2.5 million round to help the world destress.

 

To the best of our knowledge, however, the only other company taking a similar approach in using predictive analytics at scale to better align patient, provider, and insurance outcomes is Clover Health. We feel Quartet Health can compete with Clover because of its exclusive focus on behavioral health, as opposed to Clover, whose stated goal is overhauling the entire health insurance industry using big data.

 

Suggestions/Improvements

 

Quartet Health's ability to create and extract value is tightly coupled to both the quality of its predictions and the quality of behavioral healthcare that patients receive. If it can improve its models' precision without decreasing recall, it will identify more patients for whom it can improve care while reducing health providers' costs. If it can improve health outcomes for identified patients, say by matching them to the right healthcare providers or by increasing patient compliance, this will further extend those benefits. To improve its predictions, Quartet Health should refine its algorithm by (1) collecting more data on patients with confirmed physical and mental illnesses (so that causal patterns which would otherwise go unnoticed are identified) and (2) increasing both the number of data points and the types of data collected for each "potential" patient. Besides hospital records and current symptoms, studies have shown that an individual's social media activity can predict suicidal intent or indicate mental illness. As such, sentiment, online behavior, and notable changes in the way someone interacts with peers can be additional data points a patient chooses to provide to aid in diagnosis. The risk is that patients may feel their privacy has been compromised. To mitigate this, Quartet Health could first solicit doctor and patient approval, primarily monitor public activity, and only connect directly with the peers and friend group of high-risk individuals.
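
Because this suggestion hinges on the precision/recall trade-off, here is a quick sketch of how those metrics respond to a flagging threshold; the scores and labels are synthetic:

```python
# Precision and recall of a patient-flagging model at a given score threshold.
# In practice the scores and labels would come from held-out claims data.

def precision_recall(scores, labels, threshold):
    tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y)
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and not y)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.2]           # model outputs
labels = [True, True, False, True, False, False]  # truly needed behavioral care
for t in (0.25, 0.5, 0.75):
    p, r = precision_recall(scores, labels, t)
    print(f"threshold={t:.2f}  precision={p:.2f}  recall={r:.2f}")
```

Raising the threshold here buys precision at the cost of recall; Quartet's challenge is to improve the underlying model so that both move up together.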

 

Additionally, to improve patient outcomes, Quartet Health should incorporate more feedback loops so that it can make recommendations based on what patients in the same age group, with similar conditions and medical records, found to be effective. Ways to do this include:

 

  • App-Doctor: The patient can answer questions and post daily logs on mental progress, and the application can give further instructions on what to do based on these inputs, ideally forgoing the need for a doctor in less-critical cases.
  • Patient-driven questionnaires: Have patients answer questionnaires when they are with their primary care doctors, and then use answers to immediately match patients to specific mental health professionals, drug treatments, etc. This has the added benefit of gathering data directly from patients.

 

Sources

https://www.quartethealth.com/

http://www.modernhealthcare.com/article/20171125/TRANSFORMATION03/171119895

https://www.forbes.com/sites/zinamoukheiber/2015/10/25/patrick-kennedy-backs-quartet-health-as-startups-in-mental-health-are-suddenly-hot/#127ea65b62fc

http://www.mobihealthnews.com/content/quartet-sutter-health-use-big-data-get-patients-mental-healthcare-they-need

https://www.quartethealth.com/blog/price-wrong-physical-costs-behavioral-health-issues

https://www.theatlantic.com/health/archive/2016/08/could-artificial-intelligence-improve-psychiatry/496964/

https://www.neurolex.ai/

https://www.linkedin.com/pulse/its-time-tech-put-human-touch-back-healthcare-arun-gupta/

http://www.healthcareitnews.com/news/sutter-health-quartet-health-partner-mental-health-coordination

https://venturebeat.com/2018/01/03/quartet-raises-40-million-to-bridge-the-physical-and-mental-health-care-divide/

https://www.healthcare-informatics.com/article/mobile/and-comers-2017-quartet-health-s-venture-behavioral-health

https://www.crunchbase.com/organization/talkspace

https://www.crunchbase.com/organization/clover-health

https://www.fastcompany.com/40513289/quartet-health-just-raised-40-million-to-expand-its-healthcare-platforms

https://techcrunch.com/2017/11/27/facebook-ai-suicide-prevention/

 

Team

Siddhant Dube

Eileen Feng

Nathan Stornetta

Tiffany Ho

Christina Xiong

Orbital Insight: Global Satellite Image Processing

 

  • Opportunity

 

The recent increase in the number of satellites providing multi-spectral, full-earth coverage at rapidly decreasing cost creates a significant opportunity to derive business insights from the analysis of real-time or near-real-time high-resolution satellite imagery. Coupled with significant increases in computing power and cloud computing, companies can use machine learning algorithms to interpret, analyze, and process imagery data to solve business problems across a large number of applications. At a simplistic level, "picture-takers," or companies that operate constellations of satellites, capture images of the earth in multiple spectrums (visible, infrared, radar, etc.) at high resolution (down to 30 cm) for further processing. Next, a "sense-making" company accesses the data and analyzes it by comparing different images of the same object, aggregated across multiple objects and periods of time, in order to provide business insights for a given application. One of the leading companies analyzing this satellite imagery is Orbital Insight.

 

 

  • Summary of Solution

 

While some satellite companies also perform the subsequent analysis, Orbital Insight is purely a “sense-maker.”  They are experts in getting useful aggregate information out of the satellite images. In order to do this, they employ proprietary machine learning and computer vision algorithms to observe changes in specific activities or resources which are visible in or can be deduced from the satellite images.  The algorithms are trained to identify specific features in the images, like cars in a parking lot, crops in a field, or oil tankers at sea, and quantify them. In doing so, Orbital Insight can preempt commodity trends or estimate non-public information regarding a company’s performance. They partner with several key satellite constellation owners to gain access to higher quality, more complete, or more frequently updated satellite imagery.
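
As a stylized example of that aggregation step (not Orbital Insight's actual pipeline), suppose a detector has already counted cars in each parking-lot image; the "sense-making" then reduces those counts to a signal. The figures below are synthetic:

```python
# Aggregate per-image object counts from a computer-vision detector into a
# retail foot-traffic proxy. Counts are synthetic for illustration.

from statistics import mean

# (capture date, store id, cars detected) tuples produced by the detector
detections = [
    ("2018-03-01", "store_a", 142), ("2018-03-01", "store_b", 88),
    ("2018-03-08", "store_a", 155), ("2018-03-08", "store_b", 97),
    ("2018-03-15", "store_a", 168), ("2018-03-15", "store_b", 101),
]

def weekly_index(rows):
    """Average car count per capture date across all monitored lots."""
    by_date = {}
    for date, _store, cars in rows:
        by_date.setdefault(date, []).append(cars)
    return {date: mean(cars) for date, cars in sorted(by_date.items())}

index = weekly_index(detections)
dates = list(index)
change = index[dates[-1]] / index[dates[0]] - 1
print(f"foot-traffic proxy change over the period: {change:+.1%}")  # -> +17.0%
```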

 

 

  • Evaluate Effectiveness / Commercial Premise

 

Orbital Insight solves business problems through both industry- and product-specific channels. While Orbital Insight currently does the majority of its business through product-specific channels, analysis gleaned from satellite imagery can be applied to almost every industry where insight can be gained by monitoring global trends or competitor performance. Notable consumers of Orbital Insight's analysis include retail, energy, financial services, agriculture, insurance, and government. For example, by merging rich satellite data with weather and historical data in real time, Orbital Insight forecasts U.S. corn and soy production with predictive crop-yield analytics, providing investment-grade insights to clients such as hedge funds, asset managers, and financial data service providers before commercial and government statistics are available. In another application, Orbital Insight used satellite imaging in the aftermath of Hurricane Harvey to refine the flood-prediction model it offers its insurance-company clients. Orbital Insight's application of machine learning and data analytics to satellite imaging can provide customers with both valuable investment insights and business solutions.

 

  • Competitive Landscape

 

As mentioned previously, the competitive landscape in the satellite imaging market is largely divided between "picture-takers" and "sense-makers," and some companies try to span both business models. As a result, Orbital Insight competes against both imaging companies that perform analysis (e.g., Planet Labs) and companies with no satellites that focus solely on analysis (e.g., Descartes Labs). While the satellite imaging industry (i.e., the "picture-takers") is projected to be a $6.8B industry by 2023, the satellite imagery analysis industry in which Orbital Insight competes is less mature and composed largely of private companies, spanning industries that produce trillions of dollars in annual revenues. There are very few barriers to entry for imagery analysis, so Orbital Insight achieves its competitive advantage through partnerships with a number of the "picture-takers," which give its algorithms access to more timely images sourced from different data streams (i.e., multi-spectral), and through its leading proprietary machine learning algorithms. While Orbital Insight is a recently founded private company with a valuation of approximately $230M, DigitalGlobe, a leading public imaging and analysis company, gives an idea of the size of potential revenues: it achieved $725M in revenue in 2016.

 

 

  • Proposed Alterations to Increase Value

 

Orbital Insight has a strong business model with a clear value proposition, but operates in a competitive field with low barriers to entry. It is entirely at the mercy of the "picture-takers" and is squeezed from both sides of the stack, with little likelihood of succeeding through vertical integration. To succeed in the space, it should differentiate itself on algorithm performance and take measures to protect its methods. If Orbital Insight is known as the player that can most accurately predict commodity trends and extract the most value from satellite images, it will have a place in the ecosystem. Since the software patent landscape is complicated, in flux, and of varying effectiveness from country to country, Orbital Insight should be more proactive than the average company in protecting its IP. Lastly, in order not to be beholden to any single upstream firm, which may itself try to vertically integrate, it should also work to expand its number of partners in the satellite imaging industry. This will help ensure that satellite imagery remains a widely available product, allowing companies like Orbital Insight to benefit from an ever-expanding amount of data.

 

Sources

Newsweek

http://www.newsweek.com/2016/09/16/why-satellite-imaging-next-big-thing-496443.html

AI Applications for Satellite Imagery and Satellite Data

https://www.techemergence.com/ai-applications-for-satellite-imagery-and-data/

How AI Could (Really) Enhance Images from Space

https://www.wired.com/story/how-ai-could-really-enhance-images-from-space/

Global Commercial Satellite Imaging Market Size, Share, Development, Growth and Demand Forecast to 2023 – Industry Insights by Application, and by End-User

https://www.researchandmarkets.com/research/dq6sc4/global_commercial

Orbital Insight Sees the Big Picture with AI

https://www.nanalyze.com/2017/01/orbital-insight-artificial-intelligence/

How Orbital Insight Measured Hurricane Harvey’s Flooding Through the Clouds

https://www.forbes.com/sites/alexknapp/2017/09/26/how-orbital-insight-measured-hurricane-harveys-flooding-through-the-clouds/#5a7fe27b676c

DigitalGlobe Form 10-K

https://www.sec.gov/Archives/edgar/data/1208208/000155837017001064/dgi-20161231x10k.htm

Patent Protection for Software-implemented Inventions

http://www.wipo.int/wipo_magazine/en/2017/01/article_0002.html

 

Team Members

Thomas DeSouza, Matthew Nadherny, Patrick Rice, Samuel Spletzer

Recursion Pharmaceuticals (Augmented Perception – Profile) Post

Recursion Pharmaceuticals uses augmented perception to speed up the initial stages of drug discovery.

 The problem. Matching thousands of drugs to thousands of diseases:

There are thousands of rare diseases and thousands of FDA-approved drugs that may have a positive impact on them, but testing those drugs, both singly and in combination, against the rare diseases is time-consuming. Rare diseases are diseases that affect fewer than 200,000 people, so pharma companies are less likely to pursue those smaller markets. Despite the small market for any single disease, it's estimated that rare diseases affect 10% of Americans. If Recursion's platform can help access that large market, it could be massively valuable.

Recursion grew out of an experiment to find a drug for a rare disease in which weak blood vessels leak blood into the brain and cause strokes. The experiment applied 2,000 different drugs to a diseased cell sample. Then a pair of cell biologists looked at the 2,000 experiments to evaluate the phenotypic impact – how the drugs appeared, upon visual inspection, to affect the cell sample. A computer algorithm developed by Anne Carpenter's group at the Broad Institute also examined the images from the experiments. The human team selected 39 drugs that appeared to have a positive impact on the diseased cells.

Here's the interesting part: the computer program also selected 39 drugs – but the computer-selected and human-selected sets didn't overlap at all. The computer selected 39 entirely different drugs than the people did.

And, after closer study, only one of the human-selected drugs continued to show an impact, while 7 of the computer-selected drugs were impactful enough to merit further study.

The solution. Recursion’s automated phenotypic drug discovery platform:

Phenotypic drug discovery involves testing a drug in vitro (in a test tube) by applying a drug compound to a diseased cell sample. Researchers can judge the impact of the test by observing phenotypic changes in the cell sample.

Recursion's platform uses automated microscopes that send thousands of images of in vitro tests each week to image recognition software. The software, trained on image data of healthy tissue, examines the images and determines whether the tested drug makes the cells look healthier.
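
While Recursion's actual features and models are proprietary, the underlying idea can be sketched as "distance from the healthy phenotype": extract a feature vector from each image and score treated cells by how close they sit to a healthy reference. The toy featurizer below is our own stand-in:

```python
# Minimal sketch of phenotypic scoring: featurize each cell image and measure
# distance to a healthy-tissue reference profile. Features are illustrative.

import numpy as np

def features(image):
    """Toy featurizer: brightness, contrast, and edge density of a grayscale image."""
    gy, gx = np.gradient(image.astype(float))
    return np.array([image.mean(), image.std(), np.hypot(gx, gy).mean()])

def mean_distance_to_healthy(images, healthy_reference):
    """Lower distance = the drug makes cells look more like healthy tissue."""
    ref = np.mean([features(im) for im in healthy_reference], axis=0)
    return float(np.mean([np.linalg.norm(features(im) - ref) for im in images]))

rng = np.random.default_rng(0)
healthy = [rng.normal(120, 10, (64, 64)) for _ in range(5)]
treated = [rng.normal(118, 11, (64, 64)) for _ in range(5)]    # near-healthy look
untreated = [rng.normal(80, 30, (64, 64)) for _ in range(5)]   # clearly diseased
print(mean_distance_to_healthy(treated, healthy)
      < mean_distance_to_healthy(untreated, healthy))  # -> True
```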

Initial results. Recursion has made significant progress toward their goal of treating 100 genetic diseases by 2025:

Thus far, Recursion has identified promising compounds for 34 different rare diseases. Seven have progressed to in vivo tests and two are nearing applying to enter FDA trials. Their success has generated investor interest; Recursion has raised nearly $80M since February 2017.

 

 

Critique: the pharma value chain is challenging: 

The main issue that I can see with Recursion's platform is that, because of the nature of the value chain in drug discovery, it captures little value. Simply identifying that an approved drug has in vitro phenotypic effects on a cell sample isn't worth much on its own. From there, millions of dollars must be spent to test, optimize, and characterize the lead compound in animal models. Then the drug must be tested in people in FDA clinical trials, which takes many years and many millions more.

Recursion simply produces the early lead. Granted, the drug is already FDA-approved, so safety testing would often (but not always) be less intensive. And many pharma companies that own these drugs just have them sitting on the shelf, not being used or sold. So the platform could be a compelling tool for large pharma companies with patented compounds that are failing to generate good data in the clinic – a sort of second-life option for those drugs.

 

But the dataset could be valuable in the long run:

However, Recursion also seems focused on generating a massive dataset of cellular models, which grows by 20 TB each week. This data could be a valuable tool for a large pharma company that develops its own drugs, such as Recursion's partner Sanofi. I would suggest focusing on collecting that data. Perhaps the company could develop a portable version of its platform and partner with CROs (contract research organizations) to collect data from clinical trials.

 

However, in the long run, competition from AI drug design companies may be problematic:

One general category of competitors is AI-enabled drug design companies, such as Insilico Medicine and Atomwise. These companies promise to rationally design drugs from computer models, but they haven't yet produced compelling results.

 

Sources

Recursion: https://www.recursionpharma.com/

NIH profile: https://sbir.nih.gov/statistics/success-stories/recursion

NIH rare diseases: https://rarediseases.info.nih.gov/diseases

Anne Carpenter’s group at the Broad: https://personal.broadinstitute.org/anne/

Investor blog post about Recursion: https://medium.com/@CRVVC/recursion-pharmaceuticals-ai-enhanced-drug-discovery-fdb8d7aad64c

Tech Crunch article about $60M Series B: https://techcrunch.com/2017/10/03/drug-discovery-startup-recursion-raises-60-million-in-series-b-from-dcvc/

FierceBiotech profile: https://www.fiercebiotech.com/special-report/recursion-pharmaceuticals

 

 

Team Members: Brentt Baltimore, Moises Numa, Corey Ritter, Mitchell Stubbs

Augmented Bling

Opportunity:

The US luxury goods market is $85B and growing slowly; worldwide, the market was $249B in 2016. The luxury goods industry has seen a recent foray into augmented intelligence with the advent of the Apple Watch. At the Basel Fair in March, de Grisogono, one of the biggest jewelers in the world, launched a product-driven chatbot that guides users in selecting various types of jewelry (rings and pendants, in this case). The chatbot first introduces itself, then compliments the customer, and finally asks questions about the customer's taste before using augmented intelligence to offer a choice of jewelry to buy. The industry in general is facing a decline, with big players like Tiffany & Co. seeing falling sales and profits over the last two years. As Millennial tastes move away from jewelry and traditional diamonds, companies like De Beers are focusing on ways to improve the customer experience. This brings us to our solution for improving this market.

We believe a potential enhancement to this industry would be an app for virtually trying on jewelry. Data-driven marketing utilizing customer CRM data and on-site browsing habits would ensure customers' tastes are met in the best manner. With the help of the AI demonstration and experience, retailers such as Tiffany & Co. could potentially shrink their brick-and-mortar stores, reducing rent costs.

Effectiveness and commercial promise:

This strategy seems likely to pay off in a large way. According to statistics from McKinsey, online sales of luxury goods have been increasing relative to overall sales, showing a growing willingness to make large purchases virtually. By creating augmented displays, these companies decrease the time and effort required for a consumer to "try on" their product and increase the variety of options available to test, all at lower operational cost. These tools could be easily scaled with little additional marginal cost.

One limitation on this effort's success is the ease of reproducibility. Because other competitors in the space can easily copy any successful initiative, it may not serve as a strong differentiator for any one firm. However, it should drive further sales industry-wide by lowering the cost of creating an endowment effect: showing potential customers what they would look like wearing a particular piece of jewelry.

As the chatbot and the photo feature gather information on customer preferences, the company can further personalize its offerings. In this case, the phone acts as a sensor, returning information on how long users interact with the app and how many options they are choosing between. Photos that users take with the app would provide information as well; for example, the app could suggest different jewelry for different outfits. Chatbots are an easy win for luxury retail. The more data the chatbot gathers, the more personalized the price can become too, allowing capture of the maximum consumer surplus.
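
A minimal sketch of that personalization loop, using dwell time as the implicit preference signal (the catalog, attribute tags, and weighting are invented for illustration):

```python
# Re-rank a jewelry catalog from implicit dwell-time signals. Hypothetical data.

catalog = {
    "pendant_1": {"gold", "minimalist"},
    "ring_1": {"diamond", "classic"},
    "ring_2": {"gold", "classic"},
    "ring_3": {"diamond", "modern"},
}

def preference_weights(view_seconds):
    """Aggregate dwell time per attribute across everything the user viewed."""
    weights = {}
    for item, seconds in view_seconds.items():
        for tag in catalog[item]:
            weights[tag] = weights.get(tag, 0) + seconds
    return weights

def recommend(view_seconds):
    """Rank unseen items by how much dwell time their attributes attracted."""
    weights = preference_weights(view_seconds)
    unseen = [item for item in catalog if item not in view_seconds]
    return max(unseen, key=lambda item: sum(weights.get(t, 0) for t in catalog[item]))

# The user lingered on gold pieces, so the unseen gold ring wins.
print(recommend({"pendant_1": 40, "ring_1": 5}))  # -> "ring_2"
```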

Chatbots are replacing human customer service through online chat. More bot use leads to lower cost and higher service value for the customer. For a human concierge, asking many questions, analyzing the answers, and risking error in recommending a new product are all costly; once a chatbot is programmed well, the cost is very low and the service is available 24/7/365.

The chatbot will assist in gathering consumer preferences, and the virtual "fitting room" will then be used to determine whether the customer is satisfied with the item. There is significant effort in the online retail space to develop the technology needed for an accurate virtual fitting room; if the experience is not accurate, it could damage the credibility of the retailer and its technology. Amazon has introduced the Echo Look style assistant, which has received positive reviews.

The virtual stylist is already being applied to fashion; there is an opportunity to bring it specifically to the jewelry space. The company GRANI is an early adopter of the virtual jewelry fitting room. The design would be refined to let users try jewelry with different outfits and hairstyles, and the image quality improved so the exact cut and quality of the jewelry is apparent.

LINKS:

https://www.wsj.com/articles/tiffany-hunts-for-path-to-regain-cool-1499621248

https://thinkmobiles.com/blog/augmented-reality-jewelry/

https://www.fool.com/investing/2018/03/17/why-tiffany-co-stock-dropped-on-friday.aspx

https://www.mckinsey.com/industries/retail/our-insights/luxury-shopping-in-the-digital-age

https://www.ft.com/content/1c2a6b24-a514-11e7-8d56-98a09be71849

Market size: https://www.luxurysociety.com/en/articles/2017/07/us-luxury-goods-market-sees-another-year-slow-growth/

http://www.bain.com/publications/articles/luxury-goods-worldwide-market-study-fall-winter-2016.aspx

Personalized Pricing http://review.chicagobooth.edu/marketing/2018/article/are-you-ready-personalized-pricing

Analogy to Chatbots: https://chatbotsmagazine.com/3-high-value-chatbots-types-and-1-you-need-to-fire-immediately-49832901fe8a

https://www.econsultancy.com/blog/66058-fashion-ecommerce-are-virtual-fitting-rooms-the-silver-bullet

https://www.prnewswire.com/news-releases/facecake-releases-first-online-mobile-and-in-store-augmented-reality-shopping-platform-for-jewelry-at-nrf-2018-300583203.html

https://www.retaildetail.eu/en/news/mode/amazon-brings-virtual-fitting-rooms-your-home

https://www.trendhunter.com/trends/try-on-jewelry-pieces

Team Members:

Jess Goldberg

Anu Mohan

Louis Ernst

Pranav Himatsingka

Andrew Herrera

Photography Fix: Focus Pocus

The NextGen Solution to Your Perfect Photo Needs

The Problem / Opportunity

The US photography market generates $10 billion in annual revenue. A number of startups and large technology companies have aimed to improve both professionals' and amateurs' photographs (especially those taken for social media), principally by focusing on post-production services, achieving valuations in the hundreds of millions. To our knowledge, no technology company has focused on the pre-production component of photography, leaving significant open space for our company.

According to National Geographic's recent 50 Greatest Pictures issue, "a photographer shoots 20,000 to 60,000 images on assignment. Of those, perhaps a dozen will see the published light of day". Photography is an art that depends on a number of factors – timing, weather, sun exposure, angle, and more – all of which contribute to the unfortunately ephemeral nature of the perfect snapshot. This problem creates a great opportunity for a tool that can decrease the amount of time, energy, and planning needed to capture the optimal image.

 

Solution & Data Strategy

Focus Pocus offers a solution that can track, identify, and predict the best locations for a photo, showing casual and professional photographers where to go to capture their ideal shot. The solution will be made available as a downloadable app on the user's phone. In future iterations, Focus Pocus may be installed natively in wifi-enabled cameras.

Focus Pocus solves the problem of finding and taking the ideal shot by crowdsourcing the best possible photograph locations and conditions, relying on large amounts of publicly available data combined with sensor data (from users’ cameras/input) and guiding the user through the photo setup process.  

First, Focus Pocus will integrate with photo-sharing platforms like Instagram, Flickr, Google Photos, and 500px to identify publicly available photographs that are either (1) highly popular or (2) of high quality for the area where you are located. The popularity of photographs on social media can be measured by how frequently each photo is clicked, shared, or liked. High-quality photos can be identified using deep learning photo-scoring algorithms that recognize good photographs from characteristics such as clarity, uniqueness, and color.
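
The popularity half of that ranking is straightforward to sketch; the engagement weights below are assumptions, not any platform's known formula:

```python
# Combine engagement counts into one popularity score per photo.

import math

def popularity(photo, w_click=1.0, w_like=2.0, w_share=3.0):
    """Log-damped weighted engagement, so one viral photo doesn't dominate."""
    raw = (w_click * photo["clicks"] + w_like * photo["likes"]
           + w_share * photo["shares"])
    return math.log1p(raw)

photos = [
    {"id": "bean_sunrise", "clicks": 5400, "likes": 1200, "shares": 310},
    {"id": "navy_pier_fog", "clicks": 900, "likes": 410, "shares": 35},
]
ranked = sorted(photos, key=popularity, reverse=True)
print([p["id"] for p in ranked])  # -> ['bean_sunrise', 'navy_pier_fog']
```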

Next, Focus Pocus will identify which ideal shots are available to you and what settings or angles you need to use in order to achieve them, based on your camera type, time of day, lighting conditions, and other user-specific data.

Most of this data is already available: photographs taken with non-phone cameras typically contain extensive technical metadata in the Exchangeable Image File Format (EXIF), which includes the following (a minimal parsing sketch follows the list):

  • Date and time taken
  • Image name, size, and resolution
  • Camera name, aperture, exposure time, focal length, and ISO
  • Location data, lat/long, weather conditions, and map
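
Reading those fields is routine; here is a minimal sketch using the Pillow imaging library (one of several EXIF readers that would work):

```python
# Parse the EXIF fields listed above from a photo file with Pillow.

from PIL import ExifTags, Image

WANTED = ("DateTime", "Model", "FNumber", "ExposureTime",
          "FocalLength", "ISOSpeedRatings")

def read_exif(path):
    """Return the subset of EXIF tags Focus Pocus would ingest per photo."""
    exif = Image.open(path).getexif()
    tags = dict(exif.items())
    tags.update(exif.get_ifd(0x8769))  # Exif sub-IFD: aperture, exposure, ISO
    named = {ExifTags.TAGS.get(tag_id, tag_id): value
             for tag_id, value in tags.items()}
    return {key: named[key] for key in WANTED if key in named}

# Usage with a hypothetical file:
# read_exif("IMG_1234.jpg")
# -> {'DateTime': '2018:04:01 18:32:11', 'Model': 'Canon EOS 5D Mark IV',
#     'FNumber': 2.8, 'ExposureTime': 0.005, 'FocalLength': 35.0, ...}
```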

Over time, after initial training on provided photo datasets, Focus Pocus can continue to map out an entire city, with the goal of handling everything from recommending photo spots to tourists, to internally configuring the camera with the right settings, to using AR to position the camera at the right height and distance from the subject. Tracking sunlight and weather conditions, using training data to identify the best locations for the current time and conditions, could also be developed (by using integrations like LinkedIn and Rapportive).

 

Pilot & Prototype

The project lends itself to piloting at trivial cost in a single city before scaling to other cities. As a pilot, we would set up sensors (light and weather sensors, cameras, etc.) at a handful of highly-trafficked photo locations in Chicago, determined by assessing geospatial photo density using a service like TwiMap or InstMap. Early candidates would be the Bean, Navy Pier, Millennium Park, and Willis Tower. Focusing on these locations initially would also make it easier to market the product with concentrated advertising or founding employees giving demonstrations on-site.

From there we would develop a simple mobile photography app for users that would be used to recommend ideal photo locations and also enforce ideal camera settings within the app for a location based not only on the phone’s sensors, but also from our more refined on-site sensors.

 

Validation

To ensure that our solution meets the objectives of identifying the best locations and conditions for photographs, we could evaluate the predictive power of the sensors by benchmarking photographs taken using Focus Pocus against those taken without it. We could also measure the following success metrics:

  • Number of Downloads/Installations, Retention Rates, Usage Per Member

In validating market need, we've also researched applications similar to Focus Pocus and found that businesses such as Yelp and Flickr have already used deep learning to build photo-scoring models. For instance, by assessing factors such as depth of field, focus, and alignment, Yelp is able to select the best photos for its partner restaurants. However, this use case is after-the-fact (i.e., after the photo has already been taken), whereas the solution we propose allows actions to be taken to optimize photo quality before the shot.

 

Sources

https://engineeringblog.yelp.com/2016/11/finding-beautiful-yelp-photos-using-deep-learning.html

https://digital-photography-school.com/1000-shots-a-day-the-national-geographic-photographer/

Jin, Xin, et al. “Deep image aesthetics classification using inception modules and fine-tuning connected layer.” Wireless Communications & Signal Processing (WCSP), 2016 8th International Conference on. IEEE, 2016.

Aiello, Luca Maria, Rossano Schifanella, Miriam Redi, Stacey Svetlichnaya, Frank Liu, and Simon Osindero. “Beautiful and damned. Combined effect of content quality and social ties on user engagement.” IEEE Transactions on Knowledge and Data Engineering 29, no. 12 (2017): 2682-2695.

Datta, Ritendra, and James Z. Wang. “ACQUINE: aesthetic quality inference engine-real-time automatic rating of photo aesthetics.” In Proceedings of the international conference on Multimedia information retrieval, pp. 421-424. ACM, 2010.

https://www.crunchbase.com/organization/magisto

https://www.crunchbase.com/organization/animoto

https://hackernoon.com/hacking-a-25-iot-camera-to-do-more-than-its-worth-41a8d4dc805c

https://twimap.com/

https://instmap.com

 

Team Members

Siddhant Dube

Eileen Feng

Nathan Stornetta

Tiffany Ho

Christina Xiong

Descartes Labs

Opportunity:

Descartes Labs is a startup founded in New Mexico in 2014 that is building a data refinery for satellite imagery to better understand the planet. Currently, even with so much data and imagery of the planet available, it is difficult for companies and government agencies to accurately predict crop yields and potential shortages. This poses a challenge when preparing for year-over-year changes and heightens concerns about climate change and food scarcity around the globe. Descartes claims that it can predict crop yields more accurately than the US Department of Agriculture, currently the only alternative source of this information.

Solution:

Descartes takes advantage of the growing availability of large data sets, driven by shrinking, cheaper sensors and the rising popularity of nanosatellites, to determine from space how healthy the planet's corn crop is. The company uses spectral information (in bands not visible to the human eye) to measure chlorophyll, and it analyzes satellite data for every farm in the US on a daily basis to update its predictions and deliver local estimates.
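
A common way to turn those spectral bands into a chlorophyll signal is a vegetation index such as NDVI; we don't know Descartes' exact formulation, but a minimal sketch looks like this (synthetic reflectance values):

```python
# NDVI (Normalized Difference Vegetation Index) from red and near-infrared
# bands. Healthy, chlorophyll-rich vegetation reflects strongly in NIR.

import numpy as np

def ndvi(red, nir):
    """(NIR - Red) / (NIR + Red); healthy crops typically score ~0.6-0.9."""
    red, nir = red.astype(float), nir.astype(float)
    return (nir - red) / np.clip(nir + red, 1e-6, None)

# Two synthetic 2x2 field tiles (surface reflectance on a 0-1 scale)
red = np.array([[0.05, 0.06], [0.30, 0.28]])
nir = np.array([[0.55, 0.60], [0.35, 0.33]])
print(ndvi(red, nir).round(2))
# top row ~0.8 (vigorous crop); bottom row ~0.08 (bare or stressed ground)
```

Aggregating such per-pixel indices over every field, every day, is what would let a model update local and national yield estimates continuously.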

Effectiveness, Commercial Promise, and Competition:

In terms of effectiveness, Descartes Labs states that it “can predict the yield of America’s 3 million square kilometers of cornfields with 99% accuracy.” Additionally, in 2015, the predictions made by Descartes beat those of the United States Department of Agriculture by 1% and the algorithms of the company continue to improve year over year.

Descartes Labs presents an opportunity for a wide range of groups, including corporations, government leaders, and humanitarian groups. For example, Cargill, an agricultural conglomerate, is a customer of and investor in Descartes Labs. The technology likely helps Cargill understand crop yields for a given year. Descartes Labs also received a grant of $1.5 million from the U.S. Defense Advanced Research Projects Agency, which uses the technology to anticipate food shortages, and thereby predict areas of sociopolitical conflict, in the Middle East and North Africa.  

Another application of the technology is disease forecasting and prevention. The high resolution private and public satellite data can help identify high risk environments such as areas with stagnant water conducive to mosquito proliferation. Those leads can be combined with medical and social media data to predict and backtest the spread of diseases. Such information will be valuable for epidemiologists and local governments.  

Several competitors include Orbital Insight, Gro Intelligence, and Tellus Labs. Orbital Insight covers a much wider range of industries – for example, it can help retail companies with vehicle counts and traffic monitoring.

Suggestions / Improvements:

To improve, Descartes could combine its current data with readings from sensors on other devices to triangulate the information it has and make more accurate predictions. This is a direction CEO and co-founder Mark Johnson wants to go, given the vast amount of "potential sensor data we'll be getting from combines, tractors, cars, boats, barges, trains, ships, grain silo. Everything is going to have sensors on it, so making sense of all that data is the sort of challenge we're aiming toward" (Mark Johnson in Fast Company).

Descartes can also explore ways in which its data and capabilities can benefit individual farmers in addition to commercial clients such as Cargill. Descartes can partner with NGOs, consultants, and local governments to equip subsistence farmers with its data and technology.

Another potential application is tackling wildfires. Descartes can combine weather, geo-imaging, and historical data from previous wildfires to identify high-risk areas and potentially suggest effective suppression tactics once fires break out. More broadly, clients can integrate their own data with the Descartes Platform to create their own solutions, models, and forecasts.

Sources:

https://www.descarteslabs.com/

https://www.fastcompany.com/40406046/this-startup-is-building-a-fitness-tracker-for-the-planet

https://medium.com/@thephilboyer/announcing-our-investment-in-descartes-labs-9dca8257d0d9

https://www.forbes.com/sites/themixingbowl/2017/09/05/can-artificial-intelligence-help-feed-the-world/#110f4bd346db

https://blog.nationalgeographic.org/2018/02/21/forecasting-diseases-one-image-at-a-time/

https://venturebeat.com/2017/08/24/descartes-labs-raises-30-million-to-better-understand-earth-with-ai/

https://www.theverge.com/2016/8/4/12369494/descartes-artificial-intelligence-crop-predictions-usda


Team Members:

 

Sam Steiny

Rosie Newman

Gergana Kostadinova

Javier Rodriguez

Learn and track your progress as a guitar enthusiast

Opportunity:

 

Fender Musical Instruments Corporation is a manufacturer of guitars, basses, amplifiers, and auxiliary equipment. It targets all levels of players, from beginners to experts, across musical genres. The opportunity arises from Fender's struggle to retain customers: many initial learners drop out because the subject is difficult and there is no clear direction for learning. Fender realized that a person who gets past the first year of training buys equipment worth thousands of dollars over their lifetime (Business Insider). Thus, the opportunity is to use digital tools and augmented intelligence to retain customers.

 

Solution:

Fender used sensors on its instruments, big data analytics, and cloud computing to launch Fender Play, a revenue-generating digital service. The basic advantages of the solution break down into several points:

  • Tracks users' progress and gives users access to systematic modules across a variety of 'paths' so they can learn in their own style. This design is also driven by a demographic shift: the majority of novice guitar players want to learn songs they like, not necessarily to master techniques emphasized in traditional classes or to play in a band.
  • Lets users skip certain lessons and go at their own pace, saving time and money compared to traditional instructors.
  • Provides inter-device connectivity (guitars, amps, phones), with automatic updates and reminders, building brand loyalty because Fender owns the entire stack of guitar-related services.

 

Commercial promise:

Promise #1: Introducing a digital interface between a player and a guitar by adding sensors is a key enabler of the solution. Other musical instruments have interfaces that are easier to digitize (electric pianos). Digital pianos created a niche and made self-learning easier. We can expect a similar effect with digital guitars. An integration with social media and smartphones creates a collaboration platform that may, for example, allow remote band recitals, further reducing quitting rates.

 

Promise #2: Introduction of the smart guitar may do to a traditional guitar what electric piano did to an “acoustic” piano: displace it. Fender can benefit from a first mover advantage if the guitar market shifts towards digital.

 

Promise #3: Fender has considered the instructor segment by creating a tool that lets teachers monitor and follow up with students during at-home practice after lessons. By doing so, Fender shows that the platform was built to be accepted by all customers in the market, without exclusion or resistance.

Because of the lack of publicly available data, we have found it difficult to directly measure the effectiveness of the program. Thus, we cannot predict the potential revenue and growth from this segment.

Competition

 

Fender has traditionally competed with large instrument manufacturers, especially guitar makers; Gibson is the most prominent competitor in this space. If Fender changed its business model and repositioned itself as a digital services company, it would find itself up against competitors in the digital content industry (e.g., popular YouTube channels) or even community- and collaboration-based music platforms such as SoundCloud or Bandcamp, if FenderPlay lets users post their own music. These competitors might be equipped with much better analytic tools and capabilities, but there may be niche markets where Fender has stronger brand loyalty and can attract people (guitar-heavy music like rock).

 

Proposed alteration   

 

Fender Play is priced at $19.99 a month for beginners. While this subscription price might seem reasonable, offering a free trial to customers who have purchased a Fender guitar would let them try the service before spending money and so encourage them to subscribe.

Fender should explore the data it collects to observe which lessons users at each level have actually found helpful. By looking back at the data for the behavior patterns most associated with users staying loyal, Fender could build clusters from each segment's data and generate the suggestions most likely to help, based on historical data and correlations. This strategy should start with the casual users who typically drop guitar lessons out of frustration, because it narrows the suite of offerings while the digital ecosystem is built. With this increased intelligence and cloud storage capability, Fender might then be able to provide more personalized offerings in its 'paths'.
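
A minimal sketch of that clustering step, with invented engagement features (Fender's actual telemetry is not public; requires numpy and scikit-learn):

```python
# Cluster learners by engagement so each segment can get targeted suggestions.

import numpy as np
from sklearn.cluster import KMeans

# columns: lessons completed, avg minutes per session, days since last practice
users = np.array([
    [40, 25, 1], [35, 30, 2], [38, 22, 1],   # engaged learners
    [6, 10, 20], [4, 8, 30], [5, 12, 25],    # frustrated / likely to churn
])

model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(users)
print(model.labels_)           # two behavioral segments, e.g. [1 1 1 0 0 0]
print(model.cluster_centers_)  # per-segment averages to drive lesson suggestions
```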

A different strategy would be to link to popular music-tech companies like Spotify or Soundcloud to approach the target market that learns guitar to create their own music or collaborate with other artists.

Lastly, to keep customers engaged so that they do not give up, Fender might incorporate a competitive element into the platform, in which users could see a scoreboard ranking them against their peers. With rewards, badges, levels, and the ability to compare progress with peers, people would have more incentive to stay involved than they otherwise would.

 

Sources:

 

https://blogs.wsj.com/cio/2017/07/25/the-morning-download-fender-launches-internet-of-the-guitar/

 

https://blogs.wsj.com/cio/2017/07/24/fender-amps-up-its-digital-play/

 

https://www.reuters.com/article/us-fender-musical-software/electric-guitar-maker-fender-jumps-into-online-learning-idUSKBN19R145

 

http://www.businessinsider.com/fender-play-review-2017-10

 

Team Members:

Mohammed Alrabiah

Tuneer De

Mikhail Uvarov

Colin Ambler

Lindsay Hanson

Optimizing your Garbage Truck with Big Data

Opportunity

 

The multi-billion dollar residential waste management “industry” (both public and private entities) presents a substantial opportunity for innovation through the application of data-based solutions to collect waste more efficiently and improve utilization of waste collection resources.  Significant, yet variable, resource inputs (trucks, labor, and fuel being most directly relevant) offer direct cost savings for municipalities or commercial entities able to gain efficiency in applying those resources.

 

This segment is ripe for innovation for several reasons.  Globally, urbanization continues to be a trend, with increasing populations living in ever closer proximity.  The composition of waste continues to change between recyclables, compostable waste, and landfill waste. As such, resources for the collection, sorting, and disposal of solid waste continue to move towards increased categorization.  Lastly, the waste management industry appears to be receptive to disruptive efforts, as evidenced by cities such as New York that are undertaking significant waste reduction measures.

 

Furthermore, solid waste management is a sector that has largely avoided any significant optimization efforts. For example, in 2013 the city of Chicago instituted a simple "grid system" for deploying its collection trucks and abandoned its previous "ward-based" system. As a result of this relatively simple change, Chicago was able to deploy 40 fewer trucks per day (320 versus 360) and gained $18M in annual cost savings against a 2013 budget of $166M for the Bureau of Sanitation – an 11% reduction. As impactful as that reduction was, it came from an unsophisticated, non-data-driven change that did not (and does not) take advantage of the many technological tools available for further optimization.

 

Solution

 

Utilize existing data available from GPS and scale sensors on-board collection trucks to collect, analyze, and employ information regarding individual or street level solid waste production to more efficiently employ waste collection resources (trucks, labor, fuel, time, etc).  Armed with an informed picture of the specific house, street, or neighborhood-level of solid waste production, which would become more informed over time with ongoing data collection, the public or private solid waste collection entity could then optimize its resource acquisition, retention, maintenance, and utilization.  Each entity could optimize for route length, “truck-sized” routes (in pounds of waste), a specific shift length, distance traveled, cost, or other desirable optimums.
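
One concrete optimization the data enables is packing street segments, with their learned expected waste, into the fewest "truck-sized" routes. Here is a greedy first-fit-decreasing sketch with illustrative figures; a real deployment would also weigh drive time and distance:

```python
# Pack street segments into truck-capacity routes (first-fit decreasing).

TRUCK_CAPACITY_LBS = 18_000

# street segment -> expected pounds of waste, learned from historical scale data
expected = {"elm_100blk": 5200, "oak_200blk": 7400, "main_300blk": 9100,
            "pine_400blk": 3800, "lake_500blk": 6500}

def pack_routes(expected_lbs, capacity):
    """Assign heaviest segments first to the first route with room left."""
    routes = []  # each route is [remaining_capacity, [segments]]
    for seg, lbs in sorted(expected_lbs.items(), key=lambda kv: -kv[1]):
        for route in routes:
            if route[0] >= lbs:
                route[0] -= lbs
                route[1].append(seg)
                break
        else:
            routes.append([capacity - lbs, [seg]])
    return [segs for _, segs in routes]

for i, route in enumerate(pack_routes(expected, TRUCK_CAPACITY_LBS), 1):
    print(f"truck {i}: {route}")  # 2 trucks cover all 5 segments
```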

 

Data Collection

 

Modern collection trucks are currently equipped with on-board scales, GPS systems, and vehicle monitoring systems. The on-board scale is used primarily to help drivers comply with weight restrictions (e.g., small bridges and weight-restricted roads) and avoid exceeding the vehicle weight rating. GPS data includes time and location and, in turn, speed and number of stops. The on-board vehicle monitoring system provides fuel usage data, engine RPMs, and speed. Data from each of these sensors and systems could be downloaded from each truck periodically for analysis and incorporation. Any trucks lacking these features can be upgraded readily at low cost. This data, merged appropriately, would produce a data set that readily lends itself to powerful resource optimization algorithms.

 

Exhibit 1: A theoretical example of one truck’s weight over a work day.  Increases in weight allow one to deduce the amount of waste collected at each stop.  Merging this data with time-stamped GPS and operational truck data would allow one to deduce the amount of garbage collected at the block, street, or even individual building level (contingent on scale accuracy).
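
The deduction Exhibit 1 describes is simple arithmetic once the scale and GPS logs are joined on their timestamps; a minimal sketch with synthetic readings:

```python
# Per-stop waste = weight increase between consecutive on-board scale readings,
# attributed to the address of the GPS fix at collection time. Synthetic data.

scale_log = [  # (time, on-board scale reading in lbs)
    ("07:02", 9000), ("07:09", 9450), ("07:15", 9980), ("07:22", 10310),
]
gps_log = {  # time -> nearest address from the GPS fix
    "07:09": "101 Elm St", "07:15": "115 Elm St", "07:22": "12 Oak Ave",
}

def per_stop_waste(scale_log, gps_log):
    """Difference successive scale readings and tag each delta with its stop."""
    stops = []
    for (_, prev_lbs), (time, lbs) in zip(scale_log, scale_log[1:]):
        stops.append((gps_log.get(time, "unknown"), lbs - prev_lbs))
    return stops

print(per_stop_waste(scale_log, gps_log))
# -> [('101 Elm St', 450), ('115 Elm St', 530), ('12 Oak Ave', 330)]
```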

 

Pilot Program

 

The pilot program would be rolled out in a city with a large concentration of residential neighborhoods that would benefit from increased efficiencies in waste management. The program would begin by ensuring the existing fleet is equipped with the requisite on-board systems to collect data about the quantity of trash picked up at each stop along an existing route, whether at the house, street, or neighborhood-level. During this time, additional data would be collected about the costs of fuel, labor and other operating expenses associated with their route. Over the course of several months, the quantity of trash picked up at individual stops, marked on the GPS system, would be aggregated in a central location. After a sufficient data set is collected, the optimization algorithm would be applied to develop a new route or routes optimized for time, pounds of trash, fuel efficiency, distance traveled, costs, or whichever parameters the municipality or commercial entity seeks to optimize.  The benefits associated with the optimized routes could then be compared against the original routes to determine the efficacy of the program.

 

Commercial Viability

 

The global waste management industry was valued at $240B in 2016 and is expected to grow to roughly $340B by 2024. With this growth will come a dramatic increase in the fuel consumption, labor costs, and other operating expenses associated with garbage truck fleets. Applying data-based solutions to optimize routes, fleet utilization, and the labor force could provide billions in annual savings for both waste management companies and consumers.

 

Sources

 

Chicago Department of Streets and Sanitation

https://www.cityofchicago.org/city/en/depts/streets.html

 

The City of New York Department of Sanitation

http://www1.nyc.gov/assets/dsny/site/home

 

Solid Waste Management Market Share & Forecast, 2017-2024

https://www.gminsights.com/industry-analysis/solid-waste-management-market

 

Trends in On-board Scale Systems for the Waste Industry

https://wasteadvantagemag.com/trends-in-on-board-scale-systems-for-the-waste-industry/

 

Real-World Activity and Fuel Use of Diesel and CNG Refuse Trucks

http://www.cert.ucr.edu/events/pems2014/liveagenda/25sandhu.pdf

 

Average Fuel Economy of Major Vehicle Categories

https://www.afdc.energy.gov/data/10310

 

The Economics of Electric Garbage Trucks are Awesome

https://qz.com/749622/the-economics-of-electric-garbage-trucks-are-awesome/

 

Smart traffic signals

Problem

Previous road management systems ran independently of traffic and vehicle information. This causes many problems:

  • Lost productivity and wasted time: everyday traffic congestion costs companies productivity. In many countries, employees spend excessive time in traffic when they could be working.
  • High number of traffic accidents: inefficient use of traffic lights increases the risk of car accidents.
  • High number of pedestrian accidents: one study suggests that walking in traffic is 10 times more dangerous than traveling as a passenger by car, and that 15% of the people killed on European roads are pedestrians.
  • High environmental cost and energy wasted waiting in traffic: stop-start driving and long waits at lights are inefficient and heavily pollute cities.

Addressing these inconveniences by using a computer vision application (real time smart traffic lights) helps drivers, pedestrians, municipalities and even businesses (gains in productive time).

Solution

The solution proposed encompasses two main elements:

  • Two cameras located on one post of a traffic light: one oriented to capture pedestrians, and the other to capture car traffic
  • A machine learning algorithm that is capable of:
    • Recognizing “new elements” in every frame the camera captures – be they pedestrians or cars
    • Predicting (i) the trajectory of these elements using AI and (ii) the intensity of future traffic
    • Making real-time decisions on the “position” of the traffic lights for both cars (green, yellow, red) and pedestrians (white, red) – a minimal sketch of this decision step follows below
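
As referenced above, here is a minimal sketch of the final decision step, taking the two cameras' detection counts as given; the thresholds and safety constraint are our own assumptions, and the detection and prediction models themselves are out of scope:

```python
# Choose the next signal phase from pedestrian and car queue lengths.

def next_phase(cars_waiting, pedestrians_waiting, seconds_since_change,
               ped_threshold=4, car_threshold=10, min_phase_seconds=15):
    """Serve the proportionally longer queue, but never flip the light faster
    than the minimum phase time (a basic safety constraint)."""
    if seconds_since_change < min_phase_seconds:
        return "hold"
    if pedestrians_waiting >= ped_threshold and pedestrians_waiting > cars_waiting / 3:
        return "pedestrian_walk"
    if cars_waiting >= car_threshold:
        return "car_green"
    return "hold"

print(next_phase(cars_waiting=14, pedestrians_waiting=2, seconds_since_change=20))
# -> "car_green"
print(next_phase(cars_waiting=3, pedestrians_waiting=6, seconds_since_change=20))
# -> "pedestrian_walk"
```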

Effectiveness and commercial promise

For pedestrians, previous applications either didn’t take input from pedestrian traffic or used a manual push-button for input. The proposed solution is definitely much more attractive to pedestrians: it does not require any additional action from their end and still manages to improve their outcomes.

For car traffic, previous smart traffic systems used a device called an inductive loop detector, which is embedded under the street. The benefits of the new solution over this approach are the following:

  • Recent studies have shown loop inaccuracies of up to 20%, when compared with actual footage. This lack of reliability, especially in extreme congestion where these solutions are most needed, hinders extreme optimization of traffic
  • Detection of faulty loops is very difficult, and maintaining them often requires blocking roads, which goes against the objective of easing traffic

In terms of outcomes, prototypes of this technology have begun testing and have shown promising results, especially under smooth light-change conditions and when few objects are present.

Anticipated competition

Companies with access to big data sources have an advantage in providing machine learning solutions. Examples include Google and Microsoft, which are trying to revolutionize industries by developing advanced analytics for their customers. In traffic management, these companies could partner with governments and municipalities to develop integrated solutions. They have already shown some efforts in the field:

Google: The company is already involved in traffic management through its applications Google Maps and Waze, which give it an important source of information for developing further solutions. It has also backed a venture through Google Ventures called Urban Engines, which is working on aggregating traffic data to create predictive models that help users improve route management and avoid congestion based on the time of day and what's happening in the area.

Microsoft: Microsoft has Azure, its cloud analytics platform, with which it helps companies build specific tools to improve their businesses through analytics. In traffic management, it is involved in developing sustainable cities through CityNext, an initiative it shares with various companies, such as Cubic, to solve transportation issues.

Proposed alterations

Apart from the current applications of computer vision in traffic lights mentioned above, we propose the following additional uses from which society could benefit:

  • Response to emergency vehicles (e.g., ambulances, fire trucks): many more lives could be saved if ambulances and fire trucks arrived at the emergency scene on time because traffic lights changed in their favor.
  • Crime prevention: in many high-crime countries, especially in Mexico and Latin America, robberies and kidnappings often occur at red lights. Some countries allow drivers to run red lights after midnight for security reasons, but this increases car accidents, so crime prevention is another valuable use of this technology.
  • Speed management: with the cameras and the machine learning algorithm, speeding cars can be recognized directly through the cameras, enabling more accurate speed enforcement.

Sources:

Usefulness of image processing in urban traffic control, Boillot (https://ac.els-cdn.com/S1474667017438730/1-s2.0-S1474667017438730-main.pdf?_tid=efc0cf01-ae0b-4c44-831c-4295172c54b0&acdnat=1522637076_32242720358f549ac1c0a6a27df47e31)

Computer vision application: Real time smart traffic light (https://pdfs.semanticscholar.org/d1c3/bfc4e8ff2861137da2af817e4fbe709339da.pdf)

Traffic management startup backed by google: Urban engines

(https://www.roadsbridges.com/traffic-management-google-backed-startup-vies-predict-congestion-patterns)

A New Smart Technology will Help Cities Drastically Reduce their Traffic Congestion

(https://www.pastemagazine.com/articles/2017/04/a-new-smart-technology-will-help-cities-drasticall.html)

Microsoft CityNext

(https://enterprise.microsoft.com/en-us/industries/citynext/sustainable-cities/transport/)

 

Team:

Francisco Galvez

Stephanie Saade

Marisol Perez-Chow

Luca Ferrara

Caitlyn Grudzinski

Wing Kiu Szeto