cr8 – the world’s first autonomous content creation platform for visual images and video – Is asking for $175K (Team: Fantastic 4)

cr8: the world’s first autonomous content creation platform for visual images and video

Problem:

World-class Brands work with platforms such as Instagram and YouTube to create, host, and share their respective content. However, it is unclear to those companies how much of their brand content affinity social media metrics (likes, loves, and upvotes) actually point to user affinity for the brand. In addition, it is challenging for these Brands to turn those social media metrics into targeted and actionable marketing strategies. For example, it would be hard for Nike to determine what type of Nike shoes a certain consumer would be most likely to want to buy next simply by the fact that the user liked the Nike page on Facebook.

What is cr8 and How it Helps Consumer-Brand Companies:

cr8 is the world’s first autonomous content creation platform for visual images and video. Our proprietary algorithms allow users to create engaging images and video in real-time. We help capture and monetize the “creative graph” for the most valuable demographics, globally by giving Brands a novel way to access and engage consumers on these platforms. We do so by reversing the content affinity process by becoming user generated content creation and distribution platform. Fundamentally, we believe that user generated content (leveraging brand assets such as a Nike shoe) is the most valuable measure of consumer affinity. We give brands access to new data streams, for example, how consumers are leveraging their brand assets to create content for the internet or communicate with friends. This level of visibility and understanding of consumer preferences will help us establish true brand affinity.

Additional example: Using the same Nike example as above, if a user creates content using specific Nike clothes and accessories, the company Nike will be able to more effectively market specific products to that user based on current and previous content generated by that person.

How We Do It

By using machine learning, we are able to train algorithms on millions of visual datasets within specific brand verticals. Users have access to our proprietary trained algorithms and can apply it to their targeted image or video to generate new content. This new content is a merge version of the two images or videos to create the autonomous user-generated content.

Users then have the opportunity to share their content across multiple platforms (i.e. Facebook, Instagram, Pinterest, Snap Chat, or text messages). cr8 then captures the creative data graph of the user base and leverages this data to better understand the “true” brand interests of the most valuable demographics, globally and at scale.

Competitive Landscape & How We Are Different

Content creation and sharing platforms are a massive market, worth billions of dollars (i.e. Facebook, YouTube, Giphy, etc.).

Our competitive advantage stems from our proprietary technology that creates a sticky user generated content that increases barriers to entry. Our first-mover advantage and open platform model, position us to generate high-value to Brands who spend significant billions on advertising, sometimes without understanding the efficiency of their marketing spend. Reinvented content drives key insights into consumers’ true brand affinity increasing value to brands in ways no other company can do. We provide this valuable insight to Brands which, in turn, encourages Brands to deepen their relationship with us, resulting in more content and stronger deeper learning algorithms.

Business Model

We generate profits through three revenue streams:

  1. Content licensing fees for user generated content leveraged by 3rd parties
  2. Subscription fee charged to brands for access to cr8 data analytics platform
  3. 20% fee charged to content creators that generate ad dollars from content created on platform

Our Ask

Our team is asking for $175K pre-seed investment. The money will be used for:

  1. MVP development, which requires one machine learning contractor
  2. Design needs, Front-end UX/UI design contractor
  3. Initial growth marketing budget

DevGame – Continuous Feedback Crowdsourced Video Game Development – Is Asking for $150K (Team: YATR)

Opportunity

The video game industry is a large and rapidly growing market, totaling $108.9B in revenues in 2017.  Moreover, a dominant trend that has emerged over the last decade is a significant increase in the proportion of people playing video games on mobile platforms such as iOS and Android: in 2017, 42% of the video game market was attributed to mobile games played on a smartphone or tablet.  Popular mobile games yield substantial daily revenues, with Fortnite, one of the highest grossing iOS games, generating over $1M of daily revenue.  Angry Birds, costing only $140k to develop, generated $70M of revenue in its first year.  However, due to the same financial upside based on the large number of users and relatively low entry barriers, the mobile gaming segment is quite competitive with a large number of firms of varying sizes.

In this competitive space, few developers have been able to consistently produce commercially successful new content.  We believe this is at least partially attributable to inabilities to generate and commercialize successful creative content.  Two examples of this are Rovio and Supercell. After significant successes with Angry Birds and Clash of Clans, respectively, both firms have struggled to maintain user bases and revenue for these games and introduce new franchises with the same high level of commercial success.

Companies who develop and publish video games require a continuous source of unique and compelling content in the form of new game concepts, franchises, features, and expansions for existing games to remain competitive in the global video game industry.  Given that firm resources are limited in scope and scale, the firm’s organic creative resources and development capabilities will always be unable to globally maximize the scope and scale of the firm’s creative ideation. However, simultaneously, there is some level of latent creative ideation resident within the “crowd” of the online gaming community, where a diverse set of creatively-inclined individuals may be interested in providing creative ideas for these purposes in return for some reward.  Yet this “crowd” has neither the programming, development, and distribution resources of a video game company, nor the resolve and inclination of a commercial entity – both of these elements are required to move creative ideas from raw ideation to commercialized content.

In general, the current development process for the majority of game designers involves creating an initial game idea from within the firm and then putting it through various rounds of testing for commercial promise.  Firms generally develop the idea into a game, play-test the game in-house amongst employees, and then move to a limited release in a specific market or markets before global release. Firms make decisions to continue or discontinue development and release at various points throughout this process based on a variety of factors such as reception in the test market and size of the potential user base.  As mentioned above, this in-house, organic process relies on a limited and costly resource base (the firm’s employees) for the generation and vetting of nearly all creative content, features, expansions, add-ons, themes, and more.

 

Solution

DevGame will create value in the mobile gaming market by incorporating a crowdsourcing and machine learning feedback loop to decrease the cost of content creation and increase responsiveness to the tastes of the market. More specifically, DevGame will develop mobile platform video games with content selected and optimized by supervised machine learning, based on current and historical video game success. The games will receive regular content updates, where new content is sourced and selected from a combination of crowdsourcing and machine learning in a constant feedback process.

The machine learning algorithms will be designed to measure video game success as a function of a video game’s features or content. “Success,” in this regard, can also be defined as a function of numerous performance metrics, for example, revenue, number of users, number of new downloads, rate of new downloads, average user rating, and expert rating, to name a few. The initial tasks, therefore, are obtaining quantified video game features and performance data, the latter of which is readily available.

 

Quantifying Video Game Features

The first task is to aggregate and quantify video game features, both in relative and absolute terms. In order to accomplish this goal, DevGame will use a supervised latent dirichlet allocation (LDA) process to “scrape” common features from expert reviews on video game websites. The quantification of these features will occur in two distinct phases, basic and advanced. In the basic phase, the same LDA process will provide a “weight” to the feature for a specific game from a specific website, providing a relative comparison between games, and an aggregate value when the entire set (or subset) is taken into account. The advanced phase of feature quantification will use convoluted neural networks (CNN) to analyze videos of games being played, where images can be mapped to the features generated by the LDA process. As a result, the advanced phase will produce a far more sophisticated and objective measure of the feature list than is achievable through LDA alone.

 

The Creative Ensemble

With the features and performance metrics quantified, the “success” algorithm will generate features or a combination of features that can be used as the starting point for DevGame’s creative department. In this sense, DevGame is not eliminating the necessary human component of creativity; instead, DevGame is using machine learning to narrow the infinite list of possible content for the creative department based on a more accurate understanding of user preferences. This process will improve game throughput, reduce the variation of game success, and improve the product for customers.

 

Game Updates

The above process outlines the starting point for a game, where a fixed subset of features will be selected for a particular game type (eg, first person shooter), resulting in the release of the initial version, known as the “sandbox.” Users will download and play the game for a short period of time (eg, one week), before DevGame updates the game with the list of the next set of features to be implemented. This set of (initially) 3 features will be selected by the algorithm as the next “best” set of features to maximize game success based on the current version of the sandbox. Of the set of 3 features, 2 will be crowdsourced and crowd-selected, while the third feature will be chosen by the algorithm based on the outcome of the 2 crowdsourced features. For crowdsourced content, users will be randomly divided into two groups. The first group will provide the content for feature #1, while the second group will rank the content provided by the first group. The same process, with roles reversed, will apply for feature #2. Once the crowdsourced features are selected, the algorithm will supply the quantified feature #3, based on the new current state of the sandbox. DevGame will then incorporate all ideas into the next release of the game, updating the sandbox, and starting the feedback process over.

 

Demonstration

The demonstration for DevGame needs to answer a short list of questions:

  • Can a supervised LDA process produce usable common features across video games?
  • Do these features serve as meaningful indicators of game performance?
  • Can we incentivize and produce meaningful, crowdsourced content?

As such, we would generate a short list of iOS games and review websites from which to scrape and quantify game features. Based on the small sample size, the early list may require meaningful supervision in order to aggregate generated topics into usable features. Once this task is accomplished, it will be a task of modeling the effects of the features on the aforementioned game performance metrics.


With respect to ensuring a productive, crowdsourced process, numerous applications exist that demonstrate such value; the online community Steam already serves as a usable reference point for generating crowdsourced content in a video game setting. The platform has seen the positive and negative aspects of linking developers with users for game selection and content sourcing, highlighting the need for a structured process.

 

Pilot Program

The first step of the pilot program will be to outsource construction and maintenance of the algorithm to a small set of data scientists from which to develop the sandbox. Once this effort is complete, the initial creative sandbox will be generated via storyboard using DevGame’s creative resources. This concept will then be realized into an iOS game through a contract development team, who would also be retained for the short duration of pilot for the game update process. Finally, the game, including the crowdsourced feedback loop, will be tested in a limited market that is particularly suited for video games and creative content (eg, a university or a small number of universities).

 

Funding Requirement

The initial game release will be a free app with in-game purchases. The primary need for funding is the initial game development, which industry estimates place at $150,000.  At an average revenue per user of $5.00, which is in line with industry averages, the breakeven number of users would be just 30,000, which is a readily attainable goal. The mobile gaming market had 2.8 Billion monthly active users at the end of 2016.  If DevGame reached just 1% of the number of users reached by a single prominent competitor, it could conservatively achieve $24M in annual revenue.

 

Conclusion

We believe that a combination of selective crowdsourcing and machine learning algorithms can be applied to the game development process such that developers are able to more efficiently produce content that contains both the necessary conditions present in successful games and innovative content generated from critical analysis of the games.  Through the use of algorithmic tools, we will be able to analyze what features and content correlate to different measures of success within a game and then release that content to optimize revenues for a game.

 

Sources

https://thinkgaming.com/app-sales-data/

https://thinkmobiles.com/blog/how-much-cost-make-app/

https://www.codementor.io/blog/how-much-does-it-cost-to-make-an-app-in-2017-1nqj6ehste

https://www.ft.com/content/64fb0c4e-e5f0-11e5-bc31-138df2ae9ee6

https://www.wsj.com/articles/growth-slows-at-clash-of-clans-maker-supercell-1457522014

https://www.ft.com/content/1d162f88-f375-11e6-95ee-f14e55513608

https://www.reuters.com/article/supercell-results/update-1-gaming-firm-supercell-2017-profit-drops-as-clash-of-clans-sales-cool-idUSL8N1Q43HO

http://www.player.one/clash-clans-update-players-not-happy-supercells-new-compromises-take-forums-504407

https://venturebeat.com/2018/02/14/supercell-2017-results-810-million-in-profit-2-billion-in-revenue-without-a-new-game/

https://steamcommunity.com/games/593110/announcements/detail/558846854614253751

https://steamcommunity.com/games/593110/announcements/detail/1328973169870947116

https://www.statista.com/statistics/246888/value-of-the-global-video-game-market/

https://motherboard.vice.com/en_us/article/ypw3yw/the-video-game-by-committee-was-an-epic-disaster

https://www.collegeraptor.com/find-colleges/articles/college-comparisons/5-best-colleges-gamers/

 

Team Members:

  • Patrick Rice
  • Samuel Spletzer
  • Matt Nadherny
  • Thomas DeSouza

AptHero – The AI Solution Simplifying Apartment Rentals – Is Asking for $185K (Team: Deep Learners)

Opportunity:

The apartment rental industry is a $163bn industry, serving over 35 million Americans. Over the last 5 years the industry has had a growth rate of 5% and continues to show this level of growth. 2017 was the highest year of new apartments being built in the last 10 years (346,310 apartments), which creates a tremendous opportunity in the future for an increased high rate of apartment renting. Additionally, there has been an increase in apartment rentals since the recession and housing crisis. It seems people are more fearful of owning homes and making such a commitment, that rentals have increased. At the same time, the age of marriages has increased, which is often when home buying occurs, leaving more people single for longer and therefore prolonging their time in the rental market.

Another opportunity is the amount of spam or fake listings that plague the apartment rental industry. Even sites that claim they are highly reliable (Zillow, Trulia, etc.) each post advice to their users as to how to avoid scam listings, because the platforms are unable to stop these scammers entirely on their own. In 2017, researchers at NYU reviewed more than 2 million for-rent posts and found 29,000 fake listings in 20 major cities, about half of which the sites were not able to detect themselves. Being able to detect and eliminate these scams would simplify the rental experience greatly.

Finally, in reviewing the set of data gathered in relation to the apartment rental process, we see a large opportunity for improvement. Just under 60% of those surveys had moved more than 1 time in the last 5 years, and a majority used primary sites like Craigslist, Zillow and ApartmentList. Of those surveyed, we found there was quite a bit of overlap between what users value most and what they find most frustrating in the process of finding an apartment, which indicates a clear needs gap. In particular, it is clear that people could use improvements when it comes to determining the following: the safety of certain neighborhoods, nearby bars and restaurants, noise level, nearby grocery stores, and parking options.

 

Solution:

AptHero uses artificial intelligence and machine learning to address the major deficiencies of the rental process. We use the power of technology to provide more complete information to users, curate listings based on their preferences, and address the significant problem of fraudulent listings. These three aspect of our service are currently not addressed by competitors. The major aggregators of rental listings such as Craigslist currently only offer basic filter capabilities such as price, neighborhood, and some apartment and building specific amenities. Other criteria important for consumers such as noise levels, traffic and crime patterns, proximity of playgrounds, etc. can only be assessed through multiple manual searches or are completely unavailable. The manual work involved in carefully comparing even ten properties becomes quickly overwhelming so people turn to real estate agents for help. Agents can have conflicting interests and are only available on certain days and times. Moreover, whole markets such as San Francisco completely rely on online search without the benefit of agents.

AptHero allows users to easily visualize a wealth of data about the surroundings of a property to help them make better selections. Users can select to see relevant information for a given listing such as the proximity of schools, parking, pet parks, playgrounds, bars and restaurants, grocery stores, or display on the map an address that they will visit frequently such as their office location or their child’s daycare. In addition, we aggregate data on crime, traffic patterns, and noise levels. Finally, users have access to price trends. When users select properties that they like, the algorithm learns dynamically about their preferences and suggests other listings that may be of interest. This greatly shortens the search process, increases user satisfaction, and benefits property owners and managers by showing their listings to the most interested users.

AptHero’s other value-added service is detecting fraudulent listings which is an unmet need expressed by our survey respondents. The algorithm learns over time how to flag fraud. It starts with a training dataset of fraudulent listings and continuously scans listings for suspicious elements such as vague details, asking for deposit without the ability to view the property, asking for deposit without asking for background information, owners who are “out of the country,” rent “too good to be true,” request to wire money, etc.  

 

Pilot:

Our pilot will be focusing on developing and proving out the key features of our platform that would transform the apartment search market. We have a demonstration of demand based on both personal and surveyed feedback on the lack of a cohesive platform that demonstrates the following characteristics:

  1. Street and neighborhood information to provide highly requested visual prowess
  2. AI-powered dynamic, criteria-based matching without sifting through hundreds of listings
  3. Highly capable fraudulent ad detection
  4. Transparent pricing

First, we will work to develop an AI powered curation algorithm. This would require users to initially select criteria and review a select number of listings after which the algorithm would dynamically learn and curate apartment listings. Our pilot would build this initial algorithm using mock selection data and volunteer data.

Second, we would in parallel build an AI powered fraud detection software. Human generated rule sets are the primary practice today, but we seek to create a system that will learn, predict, and act on elimination of fraudulent postings. With craigslist being the most highly used platform, it is also the most prone to fraudulent listings. While competitors have fraud detection, fraud has key characteristics: long tail of many unique cases, quick pattern changes, and a highly dynamic opponent. We seek to combat with a machine learning tool that will reduce false positives, manual reviews, and over-intrusive countermeasures.

Third will be the development of a transparent pricing model using big data. Analogous to True Car’s successful price transparency, we would collect pricing information on apartment leases and pass it on to consumers cost-free based on actual transactions relayed by brokers or available through other platforms. With weekly updates, customers can enter in the zip code they are looking for and get a dynamic pricing report that they can bring to a broker to eliminate the hassle of price opaqueness.

Lastly, we would partner with tools such as Google Street view to use AI curated feeds of neighborhood views and feels that would not only provide mock views from the home, but also views of attractions and key locations nearby.

 

Competitors/Risks/Feasibility:

There are many real estate websites in the United States. For example, companies such as Zillow and Trulia received 36 million and 23 million monthly visits, respectively, as of July 2017. However, these companies have a very broad focus and are targeted towards real estate agents as opposed to the rental market. For example, 71% of Zillow’s revenues come from Premier Agent, which is a suite of marketing and business technology products and services geared towards real estate agents and brokerages. Zillow only made 10% of its revenues on rentals by advertising to property managers and other rental professionals (on a cost per lead, cost per click, or cost per lease basis). Beyond having a broad scope, many of these more established real estate websites are tedious to use and aren’t personalized to individuals.

There are several newer companies that are trying to leverage AI in the real estate market. For example, REX Real Estate analyzes hundreds of thousands of data points to identify likely buyers. The company analyzes data such as recent purchase decisions and history of home ownership in order to target potential buyers with ads. However, REX Real Estate is focused solely in the buyer/seller market rather than the rental market.

Further, companies such as Airbnb are utilizing AI in the short-term rental market. Airbnb leverages AI in three key areas: “search ranking and matching of hosts and guests; empowering hosts to understand how factors such as pricing affect their business; and keeping the community safe from issues like fraud or abuse.” Airbnb helps demonstrate the value of using AI for listings and while there is some risk that the company could leverage its technology in the rental market, the company is dedicated solely to the travel market. Airbnb markets itself as the “global travel community that offers magical end-to-end trips” and offers short-term rentals as well as tools for a better trip such as experiences and restaurants. The needs of customers on Airbnb versus the rental market are significantly different – in terms of pricing (e.g., daily rate versus monthly), location needs (e.g., focus on tourist sites with Airbnb versus neighborhood, traffic patterns, parking, safety, schools, etc. in the rental market), and how much time consumers invest in the process (e.g., review a couple of listings on Airbnb versus looking in person with the rental market). As such, the customer needs as well as the requisite algorithms are substantially different.

 

Funding:

We are asking for $185,000 to cover 1 design engineer, 1 application engineer, and 1 data scientist. This will provide AptHero enough runway to create a pilot product for our first city. With this money, we believe that AptHero can position itself as solely focused on the rental market, starting small and niche, and marketing ourselves as the rental company that is using AI to simplify apartment rentals.

 

Sources:

 

Team Members:

Sam Steiny

Rosie Newman

Gergana Kostadinova

Javier Rodriguez

Radiotek – Machine Learning Driven Mammogram Analysis | $200k ask by Analytical Engines

Opportunity

Mammography, the first line of defense in detecting breast cancer, is a high-volume area of medical practice: roughly 39 million mammograms are conducted in the US alone annually.[1] Mammograms are performed using a low-dose x-ray machine that sends ionizing radiation through the body structure. The product is the black and white x-ray image with which most patients are familiar. While medical X-ray technology as a whole underwent digitization at the turn of the 21st century, the newly digitized visual output remained black and white, simply a high-resolution version of the familiar image on which thousands of medical practitioners had been trained. Thus the medical science of reading a diagnostic mammogram remained essentially unchanged.

 

We believe this digitization without transformation was a dramatic missed opportunity. First, while digitized diagnostic x-rays do capture a rich range of grayscale (anywhere from over 4,000 to nearly 66,000 shades, or 12-16 bits/pixel), their usefulness is constrained by the number of gray shades a computer monitor can display (most of which have caught up to current imaging machines, but which still remain a funnel point as diagnostic machinery improves).[2] Second, and more importantly, the human eye can only distinguish approximately 30 of these many thousands of shades of gray.[3] Thus, even the highest-trained radiologists have a naturally limited capacity to distinguish normal and abnormal structures within the breast when represented in grayscale.

 

A commonly used tool today is Computer-Aided Detection (CAD), which reviews the digital x-ray, searches it for abnormal areas of density, and displays a visual marker (such as a yellow triangle or red square) near these places for the radiologists to carefully visually review. CAD encourages more careful image review by the radiologist, and its use resulted in a 6% increase in early detection over non-CAD image reads.[4] Yet CAD does little to address the difficulty of reading the x-ray in the first place, particularly as relates to the 40% of women with dense breast tissue. It fails to apply the fullness of computational power to analyzing the x-ray.

 

Consumers (both patients and radiologists) have a clear interest in reducing the instances in which an x-ray because it provides insufficient information due to lack of visual clarity, necessitates additional diagnostic imaging. Both parties also have an interest in reducing false negatives (times when cancers are present but visually undetectable within the x-ray) as well as false positives (When normal structures or non-cancerous growths are detected yet cannot be distinguished from cancerous growths). All of these contribute to increased health costs, pressures on patient mental health, and burdens on time and talent within the medical practice. The challenge is even greater in countries like India, where only about 10,000 trained radiologists exist, woefully undeserving a nation of 1.2 billion.[5]

 

Solution

Our company, Radiotek, will apply a smart algorithm to grayscale x-rays, analyzing groupings of pixels that relate to one another in grayscale intensity to apply distinct color sets to the previously undetectable groupings. We would apply supervised machine learning techniques and utilize training data from thousands of patients. Along with the images/scans from these patients, we would also need the outcomes. The quality of our technology would vary depending on the data on outcomes that we can collect, ranging from biopsy results to a much simpler scale for categorization of outcomes.

 

Our concept hopes to make a leap forward in breast cancer detection by building beyond the static algorithmic approach to bring the power of deep neural networks to abnormality detection in medical imaging. We aim to utilize convolutional neural networks for image processing in order to allow for filter-based detection of interesting features in mammograms. The end result for the clinician is a conversion from grayscale to color in an analytical manner, not merely one-to-one, revealing structures and their relationships to one another in ways the grayscale previously obscured. [6]

Fig. 1 – A high level view of convolutional neural network architecture used in face detection

The end goal is to produce a tool that will support the radiologist in speedy decisions, and perhaps even provide preliminary diagnostics in cases of extremely high confidence. We would also need to ensure that the algorithm is capable of reinforcement learning, in order to get feedback, learn from mistakes and ultimately move towards goals set by the radiologist or hospital in order to provide the most efficient assistance.

 

Empirical Demonstration

 

There are a number of competitors in the US, among which one, called Imago Systems, has demonstrated early success with the static version of this algorithm. Their patent-pending Image Characterization Engine (or ICE) has already launched clinical trials and is currently in the FDA review process. Below is a depiction of the ICE at work on a breast cancer case: [7]

Fig. 2- Imago Solution Implementation7

 

With confidence in the potential of a static algorithm, we can begin building the deep neural network. With insights from experts in diagnostics and from our research of the Indian market, we believe that in order to penetrate the market we would need to partner up with large private hospitals or research institutions in Tier I or Tier II Indian cities. Due to a more robust infrastructure and their interest in specialization, we plan to readily obtain historical de-identified clinical data on patients confirmed with or without cancers, calcifications, and other abnormalities. A partnership with these organizations will also provide a natural entry point into the market for building proof of concept.

 

Another competitor, Zebra Medical Vision, has tested their machine learning based algorithm and has gotten 91% sensitivity and 80% specificity – as good a performance as any published results for radiologists working with or without computer-aided detection (CAD).[8] Additionally, artificial neural networks are already in use in other diagnostic areas.[9] Given their established presence in the medical arena, we believe the time to functionality should be reasonable. Upon achieving this, we envision giving the radiologist a front-end device such as an iPad which will run the Radiotek API. The API will be communicating with multiple AWS servers (which can respond to API, run model etc.). This will ensure low initial costs for us as well as a seamless experience for the user.   

 

Commercial promise

Both the Indian market for healthcare in general, as well as the market for radiology have been experiencing around 15-20% CAGR in recent years and are projected to continue at the same rate. The current market size of radiology is estimated at ~$1.2 bn, with high concentration in major metropolitan areas.[10] There are a number of unique benefits for Radiotek here:

 

  • High growth sectors: Medical infrastructure market growing at 15%. Hospital services market currently valued at $80 billion and accounts for 71% of the industry revenues. Steady population growth and increasing insurance coverage. [11]

 

  • Business-Friendly government: Low customs duty rates on medical service (9-15%) Government of India has permitted 100% FDI for all health-related services under the automatic route.

 

  • First-mover advantage: Machine learning based diagnostics market not as competitive, and ability to capture unique market. In addition, our business model has a stronger customer stickiness element with higher switching costs for our customers.

Fig. 3- Indian radiology market geographic breakdown10

 

Ultimately our product would be sold under a license in which a medical center would pay for access to the algorithm, would house and store the image output within its own data centers (to conform with user privacy regulations), and would receive regular updates as the program continues to learn and improve. Users would pay the initial installation and licensing fee, then would pay monthly fees for both updates and volume usage. We would incentivize high volume usage to increase the volume of cancer detection data informing our system, and thus would provide tiered discounts to high volume users. Moreover, our product would initially be a diagnostic tool that would be validated by radiologists for every scan, but as we scale, we would allow for the tool to clear healthy patients with a high confidence level, therefore allowing other medical personnel such as nurses or technologists to read the scans and make decisions on non-complicated results. This would be truly disruptive in changing the landscape of the Indian radiology market and making it more accessible.

 

Financials

Lower wages in the Indian market will alleviate our labor expenses. We expect to hire two well-experienced data scientists, who would be able handle the development of the algorithm, a sales representative to be able to initiate engagements with potential customers and management at a total of labor wages of $150,000 in the first year. In addition, we expect to incur storage and computing costs through AWS EC2 at a total of ~$5500 in the first year, $20,000 for market research (industry dynamics and sales process in India), and $10,000 for  misc expenses (computers, travel etc.)

 

Year 1 Year 2 Year 3
Revenue $54,000 $282,000 $1,002,000
Operating costs $156,000 $315,820 $365,700
SG&A $14,500 $20,000 $25,000
Other costs $29,500 $20,000 $30,000
Total Cost $200,000 $355,000 $420,700
EBITDA -$146,000 -$73,820 $581,300

 

If we are able to secure our ask of $200k we expect to be able to cover our expenses in the first year and be able to apply for a $100k loan at an interest rate of 8% that will be covering our expenses in the second year. We will then be able to produce sufficient sales beyond that with 3 license sales in our first year, 15 in our second year and 50 in our third year (break-even point) at a $12,000 installation fee and $500 usage fee per month for each license.

Fig. 4- five-year revenues & net income projections

 

Challenges

    • Regulatory challenges: Murky regulatory landscape with bodies such as Indian Medical Association and Ministry of Health and Family Welfare (MOHFW). Less stringent but also less clear cut than U.S. FDA.
    • Competition: International competitors like Imago and Deepmind as well as Indian home-grown startups such as Niramai. But different approaches/shortcomings like static algorithm, less focus on building neural network, and alternative approach to invasive radio scans, respectively.  
    • Data volume: Needing sufficient clinical data. Could work with de-identified historical data and relevant health information to provide full picture of cancer screening.
    • Data storage: Need to find secure way of storing patient data, need to be careful about sharing data between hospital networks. Upon talking to data scientists, suggested ‘secure enclave’ on hospital servers housing data within which neural network can be trained and then removed, leaving data inside.

 

 

Sources

 

  1. JoNel Aleccia, High tech mammogram tool doesn’t boost detection, study shows (Seattle Times, September 28, 2015) https://www.seattletimes.com/seattle-news/health/high-tech-mammogram-tool-doesnt-boost-cancer-detection-study-shows/
  2. Tom Kimpe & Tom Tuytschaever, Increasing the number of gray shades in medical display systems — how much is enough? (J Digit Imaging, 2007). https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3043920/
  3. Francie Diep, Humans can only distinguish between about 30 shades of gray, (Popular Science Online, February 19, 2015) https://www.popsci.com/humans-can-only-distinguish-between-about-30-shades-gray?dom=tw&src=SOC
  4. BreastCancer.org, Computer aided detection mammograms finds cancers earlier but increases risk of false positives. http://www.breastcancer.org/research-news/20130425-4
  5. Dr. Arjun Kalyanpur, The Teleradiology Opportunity in India. https://www.entrepreneur.com/article/280262
  6. Lawrence, Steve et. al., Face Recognition: A Convolutional Neural Network Approach. (IEEE Transactions on Neural Network, 1997) http://www.cs.cmu.edu/~bhiksha/courses/deeplearning/Fall.2016/pdfs/Lawrence_et_al.pdf
  7. Imago Systems website, http://imagosystems.com/
  8. Ridley, Erik. AI Algorithm uses color to better detect breast cancer, 2018.
  9. Artificial neural networks in medical diagnosis, Journal of Applied Biomedicine, 2013. https://www.researchgate.net/profile/Eladia_Pena-Mendez/publication/250310836_Artificial_neural_networks_in_medical_diagnosis/links/5698d76608aea2d743771eef/Artificial-neural-networks-in-medical-diagnosis.pdf
  10. Redseer Consulting, 2018: http://redseer.com/articles/indian-diagnostic-market-shifting-to-preventive-care/
  11. U.S. Government Export Website, Indian healthcare Industry: https://www.export.gov/article?id=India-Healthcare-and-Medical-Equipment

 

Team Members:

Mohammed Alrabiah

Tuneer De

Mikhail Uvarov

Colin Ambler

Lindsay Hanson

Augmented Einstein: Pavlina Plasilova, Kelly de Klerk, Yuxiao Zhang, Aziz Munir, Megan McDonald | $175,000 ask

Opportunity:

Recovering from an injury is a lengthy process throughout which physical therapy plays a crucial role. The effectiveness and speed of recovery depends not only on the quality of the physical therapy provider but also on how diligently and precisely a patient follows his/her prescribed home exercises. Research shows that lack of compliance with home exercises is a major limitation to the effectiveness of physical therapy, and can slow down an injury recovery process. [1] For example, in a study by Bassett up to 65% of physiotherapy patients were non-adherent or only partially adherent to home exercise regimens as prescribed by their doctors. [2]  Additionally, ensuring the accuracy and effectiveness of home exercising is challenging even for patients with the best intentions without the advice and oversight of a physical therapy expert.

Furthermore, it can be challenging for patients to make the most out of in person physical therapy visits. Therapists often oversee multiple patients simultaneously in one room, relying on support from less qualified staff if at all. In the typical model, a physical therapist spends the first 30 minutes of a therapy session with the patient assessing their current needs. Then, the therapist uses this assessment to develop an exercise plan that the patient completes during the second half of the therapy session. During the latter half of the session, the patient is monitored and assisted by additional staff with less training and qualifications. The result is that a patient typically only spends half of a therapy session that s/he is paying for receiving the attention of a qualified specialist. Without enough proper guidance and corrections early on, patients often learn to perform exercises incorrectly, reducing the effectiveness of physical therapy sessions and slowing down the recovery process.

Solution:

APTitude is a software application that combines medical and anatomical science, physical therapy practices, computer vision algorithms and machine learning to help physical therapists track the improvement progress of their patients and suggest improvements or changes to their regiments in real time through regular observations. The technology uses computer vision and machine learning technology to analyze physical movement, identifying nuances in movement patterns by comparing against a database of observations. APTitude provides alerts, tracking and predicts the best individualized physical therapy regimens to help patients recover in the quickest way possible.

Doctors or physical therapists input patient data, the details of the injury, plus the initial exercises into APTitude’s computer system.  The patient then performs the exercises in front of a camera (either webcam or smartphone) that monitors his or her movements over a period of time while the program tracks progress,  suggesting corrections for proper form, new exercises, or higher levels of intensity based on the level of improvement. Patients will be able to use the program at home outside of regular physical therapy sessions via smartphone app to help them monitor the quality of their home exercises achieving faster progress, which will result in a better overall patient experience with a specific PT location. This will also allow doctors and physical therapists to monitor whether or not the patient has performed the necessary at-home exercises and adjust the regiment based on the patient’s true progress.

APTitude uses machine learning algorithms that continuously attribute certain movements to positive or negative improvement based on the patient’s injury and physical characteristics and evolve with the addition of more users and observations.  As APTitude makes more and more observations from different users, its accuracy will further improve. These observations will also be applied to the software’s future ability to prescribe movements for an individual that is showing signs of physical weakness in a movement.

Proof of Concept:

Studies have shown that application of motion capture data to medical practices is feasible. According to an article in the Journal of Physiotherapy & Physical Rehabilitation, neural network architecture produced a model for sets of human motions represented with a mixture of Gaussian density functions. The mean log-likelihood of observed sequences was employed as a performance metric in evaluating the consistency of a subject’s performance relative to the reference dataset of motions. A publically available dataset of human motions captured with Microsoft Kinect was used for validation of the proposed method. The article presents a novel approach for modeling and evaluation of human motions with a potential application in home-based physical therapy and rehabilitation. [8]

Business Model:

Target Customer

Initially, APTitude would primarily target physical therapists, as they would realize the most value through patient compliance and operating cost reduction. As more data is collected and proof points obtained, APTitude would then target insurance companies to a) reach more physical therapists b) have insurance companies take on a portion of the costs so physical therapists and patients receive a subsidy.  Insurance providers will be incentivized because of the long-term effect this will have on surgery reductions.

Revenue Stream/Structure

  • The software will initially be offered on a free trial basis to PT offices in order to obtain data from those observations.
  • Once the 3 month trial period is over we will charge a $100 license fee per PT location and a $7 fee per patient profile on the platform.
    • We explored the idea of charging a fee based on total data usage per office, but the fluctuations in payment sizes would disincentivize PT offices from signing up
  • There will be a premium version of the software platform that offers new exercises to perform once a patient has reached competency in their initially prescribed exercise regimen to aid faster recovery. How much a patient can progress further outside of the prescribed weekly regimen will be monitored and limited to avoid any potential negative effects.
    • If a patient suffers an injury in the future or wants to check their movements, the software can be accessed on their phone with limited capabilities. The program will only tell them if they are injured or needs to consult their PT and where their injury is likely stemming from. This allows the PT to receive more patients through a referral like system and increase their revenue as well as ours.

Effectiveness, Commercial Promise, and Competition:

Initial market research suggests that there is significant opportunity for this product to be commercially successful. The physical therapy market is highly fragmented, lacking clear front-runners; no single market participant controls more than 5%. [5] Currently, it attracts $35 billion in revenues, with 3.9% annual growth. [4] According to the Bureau of Labor Statistics, the job outlook for physical therapists in the next 10 years is 28%, or “much faster than average.” [16] At the same time, patient retention is a significant challenge for physical therapists, and this product has high potential to improve retention rates. By improving patient compliance, we anticipate a decrease in attrition rates, which can be as high as 40% by the seventh outpatient visit.

We believe that macro level trends will provide a tailwind for our business as we have an aging population in the US that are heavy consumers of physical therapy as well as a rising demand for an active lifestyle. These demographic trends along with increased insurance coverage should boost demand for PT. Additionally, people are employed in more sedentary jobs on average, which is leading to physical injuries when going from sitting all day to activity. Average outpatient course of treatment consists of approximately 10 visits, and insurance reimbursement rates and the number of visits approved for PT services are largely stagnant or decreasing. [4]

Job Outlook for Physical Therapy:

Competitors:

Competition for APTitude will vary depending on target market, predictiveness as well as use of hardware and robotics.  Organizations such as Bionik Laboratories and Hocoma integrate heavy robotics to evaluate and support physical therapy and rehab of patients with severely impaired patients.  Few companies are using AI to develop predictive (along with evaluative) capabilities to support either PT or Sports / Exercise needs, with the exception of PhysiMax that is geared toward professional athletes.

Pilot Program / What funds will be used for:

We are requesting $175K in funding over two years to hire technical experts and create a cloud platform to store our data. Marketing and sales staff will be managed by the founders at no cost.

  • Technical experts needed: $100 per hour
  • Hardware needed: ~$9,000 a month
  • Physical Therapy consultant needed: $50 per hour
  • Marketing and sales staff: $0 (Founders)

Pilot:

Step 1 – Our initial pilot will focus on collecting data and monitoring the experiences of 100 physical therapy patients with select physical conditions over the course of their 10-14 week rehabilitation treatments. To do this, we will partner with 2-3 physical therapist practices in Chicago. We will track their progress by assessing their performance of 5-7 movements at the beginning of their course of treatment, at weekly intervals, and at the end of treatment.

Step 2 – The video data collected from the 2-3 physical therapist offices will create a baseline distribution of movement norms. This allows our software to understand how patients should move and what effective movements are.

Step 3 – Once the software is smart enough to understand what correct movements are, it will observe patients’ movements, immediately identify physical deficiencies from this movement, and prescribe individualized movements to help the patient recover.

Future:

Our goal is to saturate the Chicago PT market by partnering with Athletico. Athletico has 57 PT locations in Chicago alone. From there we will try to expand into adjacent markets, starting with Miami, which is home to many elderly individuals at risk of injury.

Financial projections:

Sources:

[1] Journal of Orthopedic and Sports Physical Therapy https://www.jospt.org/doi/pdf/10.2519/jospt.1997.25.2.101

[2] The assessment of patient adherence to phsiotherapy rehabilitation https://www.researchgate.net/profile/Sandra_Bassett/publication/284411604_The_assessment_of_patient_adherence_to_physiotherapy_rehabilitation/links/56afc4cb08ae9c1968b48840/The-assessment-of-patient-adherence-to-physiotherapy-rehabilitation.pdf

What don’t patients do their exercises? Understanding non-compliance with physiotherapy in patients with osteoarthritis of the knee http://jech.bmj.com/content/55/2/132

[3] https://www.ibisworld.com/industry-trends/market-research-reports/healthcare-social-assistance/ambulatory-health-care-services/physical-therapists.html

[4] https://seekingalpha.com/article/3975610-opportunity-within-multi-billion-physical-therapy-industry-examining-u-s-physical-therapy

[5] http://www.dartfish.com/

[6] https://venturebeat.com/2017/10/15/bots-are-becoming-highly-skilled-assistants-in-physical-therapy/

[7] https://www.bioniklabs.com/about/overview

[8] https://www.omicsonline.org/open-access/mathematical-modeling-and-evaluation-of-human-motions-in-physicaltherapy-using-mixture-density-neural-networks.php?aid=84401

[9] https://arxiv.org/abs/1802.01489

[10] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2235834/

[11] https://www.ncbi.nlm.nih.gov/pubmed/29283933

[12] https://arxiv.org/ftp/arxiv/papers/1609/1609.07480.pdf

[13] https://phyzio.com

[14] https://pmax.co

[15] https://focusmotion.io

[16] https://www.bls.gov/ooh/healthcare/physical-therapists.htm

[17] https://www.vmware.com/cloud-services/pricing-guide.html

Team Members:

Pavlina Plasilova, Kelly de Klerk, Yuxiao Zhang, Aziz Munir, Megan McDonald

fAIrytale – Choose your own story… $175000 Ask: Louis Ernst, Jessica Goldberg, Andrew Herrera, Pranav Himatsingka, and Anu Mohanachandran

Choose your own story…

Executive Summary

Children’s books provide a portal to creativity and human development. However, even as the importance of positive role models for self-esteem becomes more apparent, many children do not see themselves represented in the stories they read. Using machine learning techniques and natural language processing, we can programmatically generate stories with characters and contexts to make all readers feel represented, empowering them to learn and grow.

We seek to raise $175,000 to create a beta version of the iOS application of our product. This funding will be applied to an application designer, consulting from a data scientist, a children’s linguistics expert, legal support, and access to data in the form of children’s literature. We intend to publish an app in the Apple App Store in 2019.

Our Challenge

Today, over 20% of US households spend more than 25% of their income on their children. Children’s books comprise a significant share of that budget. The proper development of a child’s reading ability starts at day one and has incredible effects on that child’s later ability to succeed in life. The US children’s book publishing industry is big business, to the tune of $166 million profit on $2.3 billion of revenue annually, with ebooks comprising 12% of the industry. Outside the home, annual US public school spending per student on supplies averages about $939 per student, totalling to approximately $47 billion.

However, opportunities remain. A 2018 study found that bias is often present in children’s books. Female characters are grossly underrepresented, or appear as the sidekick. The picture is even worse for people of color: they are represented in just 3% of books. And if your child has special needs, they are almost entirely missing from children’s books. It is important for children’s books to represent the diversity of the “real” world [6]. Exposure to diversity through children’s books can help “normalize” for children what may otherwise be perceived as different. It is also an opportunity for children to see role models in what they read.

Introducing fAIrytale

Children, parents and teachers no longer have to search for stories that represent them. fAIrytale is an app created with the mission to represent, empower and grow young readers and their communities through interactive storytelling. From personalized avatars to customizable storylines and dynamic creative, fAIrytale uses the power of augmented intelligence to make reader-centered stories a reality.   

We envision programmatically populating stories using parameters filled out by a reader. Stories are co-created with the readers based on the child’s development level and selected descriptors. A reader would be able to choose characters’ gender, ethnicity, family structure, special needs, or more.

Initially, we will use a corpus of pre-written stories customized with character names, pronouns, and family dynamics that match those specified by the reader. Using machine learning techniques, fAIrytale fills out the selected story using the parameters selected by the reader. For instance, if a reader indicates that their parents are a same sex couple, fAIrytale will automatically populate the story with two moms and the appropriate gender pronouns. This will accomplish our minimum goal of aiding representation of groups that don’t usually get to see themselves as the heroes in the story.

Once parameters are selected, imagery that aligns with each sentence will populate the screen. This can be accomplished using dynamic creative, a technique already used in digital advertising. Dynamic creative optimization selects the best combination of elements to include in an ad based on data and real-time feedback. A similar technique could be used to create thousands of iterations of imagery that align with text from the story, for instance, penguins eating ice cream at a beach.

Pilot

Today, much of what fAIrytale can be is aspirational. We are confident that any one piece of this plan has the potential to be a successful business. Currently, we are asking for $175,000. With this money, we will work with a mobile application developer to help build out the system. We will also buy time with a data scientist to build out the machine learning system to populate the story, consulting from a linguist to understand what language is most appropriate for children, and legal services to protect any intellectual property. Currently, we have access to the text of one hundred contemporary children’s stories and many more public domain stories through project Gutenberg. Remaining funding will be used to license access to additional stories, thereby increasing the robustness of our data.

After interviewing several parents about our planned offering, we found strong support for our idea. One parent indicated they spend $500/year on books for their child. Another indicated his desire to share his Indian heritage with his child growing up in Chicago.

We envision our target segments to be both families and schools. Parents will have access to a freemium service. The free version will have more limited customization, and based on further customer research, may have a limited number of monthly stories. The full featured version of fAIrytale will allow for more robust customization and unlimited access for a monthly fee. Based on preliminary interviews with parents, we plan on charging $7.99 per month.

fAIrytale will also target school districts, to enable schools to further expand their libraries and engage students. Schools will also have access to the free model, or will be able to license access to the premium version.

Future Considerations

This is only the first challenge in our quest. Ultimately, our goal is AI-generated children’s storytelling.

We will tag our existing database for story structure, semantics and other characteristics. Humans will provide nuance, while algorithmic tagging and word embeddings such as word2vec will provide insight into parts of speech and sentence structure. From there, we can automate portions of the story generation process. Because children’s stories are generally simpler, the task of algorithmically generating one is less complex.
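A toy example of the embedding step follows, using gensim’s word2vec on a three-sentence stand-in for our story corpus; real training would run over the full licensed database.

```python
# Toy sketch of corpus tagging with word embeddings (gensim word2vec).
# The three-sentence corpus is a stand-in for our licensed story database.
from gensim.models import Word2Vec

corpus = [
    "the brave princess rescued the sleepy dragon".split(),
    "the brave knight rescued the lost puppy".split(),
    "the sleepy dragon guarded the golden castle".split(),
]

# vector_size/min_count are gensim 4.x parameters; tiny values for a toy corpus.
model = Word2Vec(corpus, vector_size=16, window=3, min_count=1, epochs=50, seed=1)

# Words appearing in similar contexts end up with similar vectors, which
# helps group characters, actions, and settings across stories.
print(model.wv.most_similar("princess", topn=3))
```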

Even as the natural language processing capability develops, story curation will still be required. The AI will propose 2-3 suggested next sentences, and the reader will choose the next step. These interactions will also provide data on how readers engage with stories, allowing our algorithms to generate more engaging stories over time.

The primary candidates to enable this today are generative adversarial networks, long short-term memory networks (a recurrent neural network architecture with proven results on time-delayed tasks, such as referring back to a character who doesn’t appear in every sentence) and hidden Markov models (known especially for temporal pattern recognition in speech, handwriting, and gesture recognition, and for part-of-speech tagging). With a working model for generating simple stories, combined with research on childhood language development, we can adjust stories to the reading skill of our readers and help them grow.
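For a flavor of the LSTM approach, the sketch below trains a tiny character-level model on a toy corpus and samples a continuation. The corpus, layer sizes, and epoch count are illustrative placeholders; a real system would train on our tagged story database.

```python
# Compact character-level LSTM sketch (one of the candidate approaches above).
import numpy as np
import tensorflow as tf

text = ("the little fox found a lantern. the little fox followed the light. "
        "the light led the fox home. ") * 20
chars = sorted(set(text))
c2i = {c: i for i, c in enumerate(chars)}
seq_len = 20

# Build (sequence -> next character) training pairs.
X = np.array([[c2i[c] for c in text[i:i + seq_len]]
              for i in range(len(text) - seq_len)])
y = np.array([c2i[text[i + seq_len]] for i in range(len(text) - seq_len)])

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(len(chars), 24),
    tf.keras.layers.LSTM(64),                      # remembers longer-range context
    tf.keras.layers.Dense(len(chars), activation="softmax"),
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="adam")
model.fit(X, y, epochs=5, verbose=0)

# Generate by repeatedly taking the most likely next character.
seed = "the little fox found"[:seq_len]
for _ in range(40):
    x = np.array([[c2i[c] for c in seed[-seq_len:]]])
    seed += chars[int(model.predict(x, verbose=0).argmax())]
print(seed)
```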

Beyond the technical, there are also various features and business opportunities we will explore as fAIrytale grows. This includes interactive images, so that children can touch the screen and get a response. Parent interviews revealed a desire to train the AI to mimic a parents’ voice, so that they can still “read” to their child when they are away. The app could enable multiple device logins, so that families can share stories, even at a distance.

Additional opportunities include creating merchandise around recurring characters, or printing favorite stories as physical books. In the classroom, we see additional opportunities to develop lesson plans and learning opportunities through stories.

Challenges & Risks

We strongly believe in this business model, but acknowledge the risks. While the pilot technology is certainly executable, the generation of new stories is a challenge. However, algorithms such as word embeddings, generative adversarial networks, and long short-term memory networks are making big strides in natural language processing. By starting with our pilot, we can take advantage of this technology as it develops.

Additionally, in using existing stories to train generative systems, we risk codifying biases from past works. Word embedding techniques show that the word “girl” sits closer to “homemaker” than to “scientist”. Used naively, these techniques would only perpetuate harmful biases. However, work on the subject has shown that it is possible to differentiate between biased associations and properly gendered words: “girl” should be closer to “waitress” than to “waiter”, but no closer to “homemaker” than “boy” is. These debiasing techniques should work equally well across other dimensions of bias, such as ethnicity.
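The core of the debiasing idea fits in a few lines: remove the projection of a should-be-neutral word vector onto the gender direction (the “neutralize” step from Bolukbasi et al., 2016). The 4-dimensional vectors here are invented purely for illustration.

```python
# numpy sketch of the "neutralize" step from hard-debiasing word embeddings.
import numpy as np

def neutralize(v: np.ndarray, gender_dir: np.ndarray) -> np.ndarray:
    """Subtract the projection of v onto the gender direction."""
    g = gender_dir / np.linalg.norm(gender_dir)
    return v - np.dot(v, g) * g

she = np.array([1.0, 0.2, 0.0, 0.1])
he  = np.array([-1.0, 0.2, 0.0, 0.1])
gender_dir = she - he                        # direction that encodes gender

homemaker = np.array([0.8, 0.5, 0.3, 0.0])   # biased toward "she"
homemaker_fixed = neutralize(homemaker, gender_dir)

cos = lambda a, b: a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
print(cos(homemaker, she), cos(homemaker_fixed, she))
# After neutralizing, "homemaker" is no closer to "she" than to "he", while
# definitional pairs like waiter/waitress would be left untouched.
```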

Looking Forward

We believe that fAIrytale holds great promise. The technology to create a minimum viable product is proven, and the technology for many of our aspirational goals is fast improving. Given the extremely positive responses from interviews with parents and school teachers, and the importance of reading to child development, this project holds great potential not only in economic value but in social value as well. We are excited to choose our adventure with fAIrytale, and we hope you choose to join us.

CITATIONS:

  1. Rivera, Edward. “OD4394: Children’s Book Publishing Industry Report.” IBISWorld, Dec. 2017. Accessed 28 May 2018.
  2. Myers, Walter Dean. “Where Are the People of Color in Children’s Books?” The New York Times, 16 Mar. 2014, https://www.nytimes.com/2014/03/16/opinion/sunday/where-are-the-people-of-color-in-childrens-books.html.
  3. “Speech, Language, and Swallowing – Development.” American Speech-Language-Hearing Association, https://www.asha.org/public/speech/development/34/. Accessed 28 May 2018.
  4. “School Spending per Student.” National Center for Education Statistics, https://nces.ed.gov/programs/coe/indicator_cmb.asp.
  5. Horning, Kathleen T. “Publishing Statistics on Children’s Books about People of Color and First/Native Nations and by People of Color and First/Native Nations Authors and Illustrators.” Cooperative Children’s Book Center, School of Education, University of Wisconsin-Madison, 22 Feb. 2018, ccbc.education.wisc.edu/books/pcstats.asp. Accessed 29 May 2018.
  6. Epstein, BJ. “Kids Need Diversity in Books to Prepare Them for the Real World.” Newsweek, 7 Feb. 2017, www.newsweek.com/childrens-books-diversity-ethnicity-world-view-553654. Accessed 29 May 2018.
  7. Gers, Felix A., Nicol N. Schraudolph, and Jürgen Schmidhuber. “Learning Precise Timing with LSTM Recurrent Networks.” Journal of Machine Learning Research 3 (2002), http://www.jmlr.org/papers/volume3/gers02a/gers02a.pdf.

 

ArtInvest – (Rebel Alliance) – Integrating Blockchain and Machine Learning Analytics in the Alternative Asset Space – $200K

Background – Art Industry Overview  

88% of wealth managers believe art and collectibles should be offered as part of their wealth management services. To understand how art trading fits into wealth management services, we must first understand how the art market works and its key players:

 

  • Art Galleries: represent artists and sell their work; they determine and manipulate prices in the primary market
  • Buyers: museums/galleries or high-net-worth individuals who collect and sell art for both personal and investment-related reasons
  • Auction houses: provide appraisal experts and venues for selling art to the public
  • Wealth Managers: art buying/selling is increasingly popular among wealth managers as another investment option for clients

There are two segments in the art market: the elite end (galleries) and the low tier (small, unknown local galleries outside urban areas, where prices are listed and observable). In the elite segment, galleries constitute the primary market; they determine the price (which is hidden from the public) and maintain control of it by manipulating the secondary market, which comprises auction houses (who make prices public and sell to the highest bidder) and owners who resell art previously bought from galleries or auction houses.

Online art market

The global online art market reached $5.4B in 2017, accounting for 8% of the value of global sales. Auctions represent 47% of online sales and dealers 53%. Most of those surveyed in 2017 recognized the online channel as a key area of growth for the next five years.

Source: The Art Market Report 2018 by Art Basel and UBS

Opportunity – Where are there pain points in the current system?

Currently pain points exist in every segment of the art trade value chain:

  1. Authentication & proof of ownership: The lack of a central title agency to track past ownership, combined with a reliance on human judgement, makes establishing authenticity difficult and error-prone.
  2. Price valuation: The current art trading market is not liquid, and in many cases art is not sold at auction. This makes the price valuation of art highly arbitrary.
  3. Utilization: The current system of ownership is based on one owner utilizing one piece of art, which does not fully capture the value of the work.
  4. Art investments: Even though infrastructure exists to enable investment in art, there is no system to support decision-making and prediction of future returns and risks.

Proposed Solution

Our proposed solution has several integrated components, each of which builds on existing technology that is, in almost all cases, open-source and available for repurposing, or is reasonably replicable.

Machine Learning Platform: We see two uses, authentication and predictive analytics. Using tools available via the Google Cloud Platform, our team believes a highly accurate machine-vision model could be trained on high-definition image data to identify forgeries on an ongoing basis, for any onboarded artwork, with near-100% accuracy. A recent paper by Elgammal, Kang, and den Leeuw of Rutgers and the Atelier for Restoration & Research of Paintings used an RNN in ensemble with a “handcrafted” supervised model to achieve 100% forgery recognition on drawings. The companies Verisart and Dust both currently use machine vision to create a digital token or certificate for such assets, and could be partnered with if the technology proves too difficult to deploy internally.
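For illustration, a transfer-learning sketch of the forgery classifier follows. This is our own minimal setup on top of a pretrained CNN, not the Rutgers ensemble cited above; the data directory layout and label scheme are assumptions.

```python
# Hedged sketch: transfer learning on a pretrained CNN to separate
# authentic works from forgeries. Assumed layout: data/authentic/*.jpg
# and data/forgery/*.jpg.
import tensorflow as tf

IMG = (224, 224, 3)

base = tf.keras.applications.MobileNetV2(
    input_shape=IMG, include_top=False, pooling="avg", weights="imagenet")
base.trainable = False          # keep pretrained visual features frozen

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # P(forgery)
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

train = tf.keras.utils.image_dataset_from_directory(
    "data", image_size=IMG[:2], batch_size=16, label_mode="binary")
model.fit(train, epochs=3)
```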

With regard to analytics, the company Artrendex currently uses a dataset of extant artwork to formulate categories that are useful for trend identification. We believe continuing this form of research could provide insights into expected returns. Moreover, integrating those expectations with Monte Carlo processes and traditional portfolio theory (both already employed in structured-product businesses) could liken this asset class to more traditional financial products. In so doing, it could be made more readily understandable, and optimizable, by Wealth Managers.
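A minimal version of the Monte Carlo idea: simulate terminal values for an art position under assumed drift and volatility, then read off distributional statistics a Wealth Manager would recognize. The inputs are placeholders, not estimates from real sales data.

```python
# Monte Carlo sketch: five-year return paths for a $1 art position.
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 0.06, 0.18          # assumed annual drift and volatility
years, paths = 5, 100_000

# Geometric-Brownian-motion terminal multiples.
annual = rng.normal(mu - 0.5 * sigma**2, sigma, size=(paths, years))
terminal = np.exp(annual.sum(axis=1))

print(f"median 5y multiple: {np.median(terminal):.2f}")
print(f"5% VaR multiple:    {np.percentile(terminal, 5):.2f}")
```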

Distributed Ledger: Here, the blockchain element of ArtInvest’s platform allows liquidity to form for individual art pieces, since ownership stakes can easily be traded. The secondary market can then provide accurate mark-to-market (MTM) values for pieces far more frequently than auctions, and even supports derivative products for hedging positions. Moreover, the operational benefits of a smart-contract-enabled platform would make transaction verification and recording robust. Incorporating the requisite tax and regulatory frameworks becomes straightforward, as the rules can be embedded directly into the contract logic; alternatively, Wealth Managers can use their existing infrastructure as a compliance overlay.
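The contract logic can be sketched off-chain in ordinary Python: a per-artwork token ledger with validated share transfers. A production version would run as a smart contract on the permissioned ledger, with tax and compliance checks embedded at the marked point; all names and quantities below are illustrative.

```python
# Off-chain sketch of fractional-ownership contract logic.

class ArtworkToken:
    def __init__(self, artwork_id: str, issuer: str, total_shares: int):
        self.artwork_id = artwork_id
        self.balances = {issuer: total_shares}   # issuer starts with 100%

    def transfer(self, sender: str, recipient: str, shares: int) -> None:
        if self.balances.get(sender, 0) < shares:
            raise ValueError("insufficient shares")
        self.balances[sender] -= shares
        self.balances[recipient] = self.balances.get(recipient, 0) + shares
        # On-chain, embedded tax/compliance checks would run here.

token = ArtworkToken("warhol-042", issuer="gallery", total_shares=1_000)
token.transfer("gallery", "investor_a", 250)   # investor_a now holds 25%
print(token.balances)  # {'gallery': 750, 'investor_a': 250}
```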

The solution is made scalable by implementing machine learning processes on the cloud and operational processes on a permissioned distributed ledger on which tokenized works can be traded. In each case, this can be done with almost entirely open-source software.  

Benefits/Challenges

Benefits:

  • Transparency – Sales recorded and traceable through the blockchain, ensuring information on past ownership and prices
  • Liquidity – Selling shares of art increases transactions, increasing access to art ownership and accuracy of prices
  • Regulation – Creating a standardized process for art sales can allow for greater guidelines and regulation for the industry as a whole
  • Investment Opportunities – Increased transparency and liquidity will allow Wealth Managers to provide art investment as a trusted and beneficial investment tool in their portfolios

Challenges:

  • Selling a physical object on a virtual platform – Sellers could authenticate an item and then deliver a forgery at the time of sale, leading to potential trust issues with our platform
  • Resistance in the current market – The current market is full of high-net-worth individuals who prefer anonymity and less regulation, which could deter them from using our product
  • Lack of support from auction houses and appraisers – Auction houses and appraisers might see our process as replacing their line of business, especially if the need for appraisers decreases with our new authentication tool. We will need these market participants’ support to ensure our product is funded and trusted by the art-selling community

Commercial Promise – What Appropriable Value is There?

ArtInvest’s initial target customer segments are dealers and wealth managers. Revenue will be generated through two main streams: rental fees from leasing out the artwork on the platform and fees from providing market intelligence and data analytics services to wealth managers.

Market opportunity for the art leasing market:

In 2017, the fine art market recorded $63.7 billion worth of sales across 39 million transactions. Each piece sold required appraisal and validation at least once, demanding extensive time, effort, and money. With ArtInvest and its machine-vision technology for identifying forgeries, we plan to help dealers and wealth managers reduce acquisition costs by 50%. We conservatively estimate that capturing 0.5% of overall transaction volume over five years would represent annual transaction value of $347 million by Year 5, generating more than $9 million in rental fees.

Market opportunity for providing art data analytics services:

In 2017, according to a study by Deloitte, 88% of wealth managers said that art and collectibles should be included as part of a wealth management offering. In 2016, $1.6 trillion of ultra-high-net-worth wealth was allocated to art and collectibles, and this is expected to grow to $2.7 trillion by 2026. We estimate that providing investment management tools to these wealth managers, based on a 2% fee, would yield more than $6 million annually.

Given the potential size of the industry (roughly $1.7 trillion in allocated wealth, with more than $50 billion in annual sales), capturing even a small market share would bring significant incremental revenue to the company, and we foresee that, with the network effect, the platform will see exponential growth once a critical mass of artworks is registered.

 

Potential Competition

The competition currently consists of several small ventures, largely based in London, which focus on different elements of the opportunity set. Verisart and Artrendex are startups focusing on art verification and token creation, while Codex and Maecenas are blockchain platforms, as is ArtStaq, our most similar competitor. Currently, each of these companies targets the end user or “consumer”; no company is focused on developing a B2B model. ArtInvest would use its technology as a backend service for existing Wealth Management firms, leveraging their expertise and existing client bases while providing them the same kind of service our competitors offer individuals.

Funding Needs

The estimated cost to build the platform in Year 0 is $1.05 million, the majority of which is the cost of hiring engineers. In the initial 3 months, we aim to build a minimum viable product with the blockchain and machine learning set up and a beta application, which will be tested by selected users in the art industry. The estimated cost to build the MVP is $200,000.

Appendix:

*Revenue includes: Rental fees, fees from investment management, storage fees and insurance fees. Note that both storage fees and insurance fees are expensed and do not contribute to the bottom line.

Sources

Market Size:

https://www.ft.com/content/69addf50-c5d8-11e6-8f29-9445cac8966f

https://www2.deloitte.com/lu/en/pages/art-finance/articles/art-finance-report.html

Cost of Authentication:

https://www.ifar.org/authentication.php

Codex Blockchain White Paper:

https://docsend.com/view/n4sbws8

Artrendex – Computer Vision which Verifies Art by Brush Strokes

https://medium.com/@ahmed_elgammal/picasso-matisse-or-a-fake-a-i-for-attribution-and-autehntication-of-art-at-the-stroke-level-f4ec329c8c26

http://www.artrendex.com/

Machine Learning Technique:

https://arxiv.org/pdf/1711.03536.pdf

Provenance Guide:

https://www.ifar.org/Provenance_Guide.pdf

Current Technology:

https://www.nytimes.com/2009/10/29/business/global/29iht-nwsmart29.html

https://www.technologyreview.com/s/609524/this-ai-can-spot-art-forgeries-by-looking-at-one-brushstroke/

https://www.artsy.net/article/artsy-editorial-these-four-technologies-may-finally-put-an-end-to-art-forgery

Appraisal Info: http://www.art-care.com/articles/the-professional-art-appraisal-what-to-expect.html

Art Investment Funds:

https://www.cnbc.com/2015/05/29/wealthy-investors-dabble-in-art-investment-funds.html

Tax information:

https://www.irs.gov/pub/irs-utl/annrep2016.pdf

http://www.wealthmanagement.com/high-net-worth/basics-art-valuation

https://www.irs.gov/compliance/appeals/art-appraisal-services

https://itsartlaw.com/2017/10/13/vagaries-of-valuation-for-collections-of-artwork/

Code from GitHub:

https://github.com/ITPeople-Blockchain/auction/blob/v1.1.0/art/artchaincode/art_app.go

 

Dr. Loo || the Mean Squared Terrors || $200k

A thoughtful smart monitor for personalized health over time

 

Problem

Currently, there is no easy way for people to receive a holistic picture of their daily health. There are Fitbits that can track steps and calories burned, at-home monitors that check blood pressure, and thermometers to check body temperature. Besides the yearly checkup with a primary care physician, however, there are very few ways to gain access to a detailed report about our health. Continuous monitoring of a patient’s health is crucial for both preventative care and management of chronic conditions; however, tests that require people to go out of their way to produce samples (e.g., the traditional pee-in-a-cup method and daily diabetic blood glucose testing) are cumbersome at best and often painful, costly, inconvenient, and difficult to scale. Furthermore, for many of these patients with chronic conditions, there are significant challenges in encouraging adherence to medical regimens, which only exacerbates the aforementioned problems.

The diagnostic and medical labs industry currently has annual revenues of $53 billion. Of that, an estimated $8.5 billion is spent annually on urine testing and screening. Additionally, according to one medical study, the average diabetes patient spends nearly $800 per year on supplies for testing their blood glucose levels, and about $2,100 more on insulin prescriptions and associated supplies. Based on the American Diabetes Association’s most recent estimates, 23.1 million Americans have been diagnosed with diabetes, and approximately $327 billion is spent on diabetes treatment each year. While diabetes represents just one diagnostic case, these numbers point to tremendous upside and opportunity for a new, disruptive solution that makes patients’ lives easier at a more affordable price.

 

Solution + Value Prop

Our solution, Dr. Loo, uses IoT sensors, cloud-hosted big data analytics, and machine learning algorithms to give customers dramatically enhanced insight into their current and future health by leveraging day-to-day urine data. While long-term growth and success in this area may require advances in diagnostic technology, there are a number of compelling use cases that are already feasible which we can use to develop our products, while we continue to perform the research and development necessary to achieve our long-term ambitions.

Our MVP is an add-on cartridge for toilets, which will collect samples of users’ urine to monitor their health. These smart toilet cartridges will contain urinalysis test strips with numerous chemical pads (each representing a different test, outlined below, and changing color when reacting to compounds present in urine) and an optical camera that will capture the results on these strips. Additionally, an IoT-connected smell sensor will be incorporated into the product to provide even more robust results based on the odor of a person’s urine. These results will then be digitized, analyzed, and sent in a daily summary to the companion app on the user’s phone.

The strips, optical camera, and smell sensors will feed their outputs into machine learning models trained on our database, which carefully interpret the colored chemical pads and smell inputs to produce a final analysis for the customer, available through an app on their mobile device. Additionally, we will anonymize the collected data and run algorithms on verified data from patients with preexisting conditions to assist in the diagnosis of other customers. This incorporation of crowd-sourcing will enable our solution to become even more accurate as more customers use our product.
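As a toy illustration of the pad-interpretation step, the sketch below classifies the camera’s RGB reading of a single pad with k-nearest neighbors; the reference readings and labels are invented, and real strips would require calibrated reference data.

```python
# Toy sketch: classify the RGB reading of a glucose pad with k-NN.
from sklearn.neighbors import KNeighborsClassifier

# Reference readings (R, G, B) -> glucose level label (invented values).
X = [
    (90, 200, 180), (95, 195, 175),    # "negative"
    (120, 170, 90), (125, 165, 95),    # "trace"
    (150, 120, 60), (155, 115, 65),    # "high"
]
y = ["negative", "negative", "trace", "trace", "high", "high"]

clf = KNeighborsClassifier(n_neighbors=3).fit(X, y)
print(clf.predict([(118, 172, 92)]))   # -> ['trace']
```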

Samples of a Mock-Up for Our Prototype

While dipsticks for urinalysis have been on the market for decades, the accuracy of their results is heavily dependent on proper sample preparation, correct interpretation of the color scales, and precise readout timing. Our product presents two valuable propositions: (a) the testing process requires little to no change in users’ daily behavior, and (b) the strips themselves can contain dozens of different tests, customized to the user’s health needs.

Specifically, we would like to measure the following types of metrics:

  • Nutrition. Urinalyses can identify a person’s nutritional deficiencies by determining whether a person is under or over the daily recommended range of intake on certain vitamins, fats, sugar, protein, etc.
    • Metabolic Analysis (yeast/fungal, vitamin and cellular energy markers)
    • Amino Acid Levels & Oxidative Stress Analysis
  • Hormone Activity. Urinalyses that detect surges in LH (luteinizing hormone) or the presence of hCG (human chorionic gonadotropin hormone) can help women who want to become mothers: (1) plan for their pregnancy by predicting time of ovulation and peak fertility and (2) confirm their pregnancy.
    • LH Hormone Ovulation Test
    • hCG Pregnancy Test
  • General Health. Urinalysis can be indicative of a person’s overall health. In addition to confirming a person is well hydrated through the color and cloudiness of the urine, dipstick tests can measure acidity, the presence of blood, and specific gravity.
    • pH level. More acidic urine (i.e. lower pH) can be associated with stress, inflammation, dehydration and a high-carb diet, while urine pH toward the upper end of the typical 5-to-7 range has been claimed to indicate “calmer physiology, hormone balance, as well as safer and more successful fat loss”. Higher urine acidity can also hint at acidosis, a condition that can lead to kidney stones or be indicative of existing kidney diseases.
    • Urinary specific gravity (concentration of solutes in urine; provides information on kidney’s ability to concentrate urine).
    • Presence of Red & White Blood Cells.
  • Disease-Specific Risks.
    • Glucose level (for diabetics)
    • Protein level (for kidney disease)
    • Presence of bacteria (for urinary tract infections)
    • Prostate cancer

In addition to diagnostic capabilities, our product can serve a therapeutic objective by syncing results and making them available to a patient’s doctor. For instance, patients with diabetes would find it particularly beneficial to verify their glucose levels are within normal range and to be prompted at appropriate times during the day to take insulin shots. Dr. Loo is also convenient, enabling patients who previously had to endure the discomfort of pricking their finger to simply monitor their glucose levels through a normal, painless activity. Similarly, certain illnesses require a patient to keep the pH of their urine within specific margins to ensure the efficacy of treatment.

Based on the intended metrics and capabilities described above, we’ve identified two target customer segments:

  1. Users who have no existing conditions or physical symptoms. These users are typically in the 25-40 age group and are very health-conscious. They may have a family history of diabetes, high cholesterol and/or other illnesses, and thus are interested in more frequent monitoring of their personal health.
  2. Users with existing conditions. These users are typically in the 41+ age group, and as patients with diabetes, kidney disease and/or other conditions, they need to ensure that certain metrics, such as blood glucose level and urine pH, are within a specific range.

 

To further validate the market need and opportunity for Dr. Loo, we conducted a survey via Google Forms. Out of the 58 respondents, 73% want health diagnostics more than once a year. In addition, 57% of the respondents would pay more than $10/month for a solution like Dr. Loo.

Desired Report Frequency

Willingness to Pay (per month)

 

Implementation, Roll-out, & Next Steps

For our first iteration of the minimum viable product, we want to sell Dr. Loo directly to a group of end customers. Given Dr. Loo’s ability to help users maintain an ongoing record of their health, as well as detect and prevent the progression of various illnesses, it can be marketed successfully to these customers through healthcare providers and other channels, such as health and fitness magazines and sites. These customers would face an initial cost of $200 for the purchase of the instrument (potentially reimbursable through insurance companies) and then a monthly subscription charge of $20 for the mobile app and replaceable cartridges.

For future steps, we want to consider selling to hospitals so that they can install these directly into their toilets and have their patients use them for quicker, more convenient test results. In addition, to make Dr. Loo more accessible to patients and a broader user base, we would seek government approval for use of funds from flexible spending accounts and eligibility for reimbursement through insurance companies.

Although our first iteration focuses on urine samples, in the future we also want to incorporate stool into Dr. Loo’s repertoire so that test results can be even more robust. Stool samples can reveal conditions that urine cannot, including food poisoning (and gastrointestinal infections), drugs, STDs, and other types of disease. In addition to incorporating stool, we hope to enhance Dr. Loo’s range of usefulness and accuracy by doing the following:

  • Pursue a more granular analysis / detailed results through microscopic exams, gas chromatography/mass spectrometry, etc.
  • Partner with Labcorp and other testing agencies to ensure the latest tests are available
  • Integrate with other fitness applications (e.g., Fitbit)

Another future addition would be to allow for automatic reordering of cartridges based on number of urinalysis strips left in the existing cartridge. This makes it so that users don’t even have to remember when to reorder a new cartridge, paving the way for further automation.

 

Budget, Cost, & Funding

We would like to request $200k to fund Dr. Loo. We are assuming an upfront cost of $200 per unit to customers for the initial equipment (see below for breakout), plus a monthly subscription fee of $20 (for cartridges and the app), growth rate that starts off at 13% and monthly churn rate that starts off at 4% (with rates stabilizing over the months), and costs for R&D, staff, manufacturing, etc. Please see this sheet for financials surrounding growth and cost hypotheses as well as an in-depth model broken out by month for the first 2 years.
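A simplified, self-contained version of the unit model behind those financials follows; the initial cohort size and the rate-stabilization schedule are our assumptions, and the full month-by-month model lives in the sheet referenced above.

```python
# Simplified unit model: monthly subscriber growth with churn, plus
# upfront hardware revenue. Cohort size and stabilization schedule are
# assumptions for illustration.
UPFRONT, MONTHLY = 200, 20

subs, revenue = 100, 0.0                # assumed initial cohort of 100 units
growth, churn = 0.13, 0.04
for month in range(1, 25):
    new = round(subs * growth)
    lost = round(subs * churn)
    subs += new - lost
    revenue += new * UPFRONT + subs * MONTHLY
    growth = max(0.03, growth * 0.97)   # rates stabilize over time
    churn = max(0.02, churn * 0.97)

print(f"subscribers after 24 months: {subs}")
print(f"cumulative revenue: ${revenue:,.0f}")
```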

Other costs to keep in mind (also built into the model):

  • $20-25k average to build initial iOS and Android apps
  • Free AWS credits are available to startups to handle our analytics and basic client-server cloud-compute app; if we apply to the AWS Activate program with our VC’s help, we should be able to run our first year without spending anything on computing infrastructure.
  • Shipment costs of $2 per package

The image below is a build-up of our component costs for the physical product that attaches to the consumer’s toilet.

The components alone total an estimated $139.07 per unit. With an additional 35% premium for manufacturing and shipping, this brings us to a total cost of $187.75. We then factor in a small buffer to arrive at an upfront cost of $200 per unit.

The screenshot below is tab 1 of our financials sheet and contains a snapshot of the high-level numbers. We hypothesize we will need $182k; however, we want to factor in an $18k buffer, making our total ask $200k.

Risks

We may need to go through FDA approval for our product, since it is a type of medical device. Because our initial product uses existing medical testing procedures rather than developing new ones, we do not anticipate needing a high-level approval, as the risk is very low; if we were to seek approval, we would aim for a low-risk classification. In the event that the FDA disagrees, we would be able to appeal for permission to conduct clinical testing as a nonsignificant-risk device, which requires only IRB approval, or potentially turn to the EU for CE mark approval.

A related risk involves securing backing from medical professionals. We will need to talk to as many doctors as possible to vet any concerns and make them our biggest advocates in recommending this product to their patients, so that we can ensure greater adoption by end users.

Another risk to keep in mind is sample dilution. We will devote a significant portion of R&D to figuring out how to capture the sample. One method is a pipette that extends into the toilet bowl and draws up a sample after each bathroom use. This is easily the least costly method; however, it also means dealing with diluted samples, as there is already water in the toilet bowl. Another method is a funnel that captures the sample directly. This would ensure pure samples; however, it could introduce the risk of bacteria formation, greater manufacturing costs, and minor discomfort for the user.

Lastly, compared to a urinalysis sent directly to labs, dipstick testing lacks precision and is limited in the types of conditions that can be detected. Although the chemical reactions and color changes are reliable in identifying the presence of known abnormalities, dipstick testing does not quantify the seriousness of the abnormalities or their underlying causes. Therefore, in the event that a condition persists, a user would still need to follow up with a healthcare provider for further diagnosis and, if necessary, a treatment plan. Nevertheless, dipstick testing is still a much more convenient form of urinalysis that delivers benefits when users have no physical symptoms (and would therefore have left a condition undetected) and when patients require ongoing metrics to monitor their illness. As we develop the capability to deliver more granular results (e.g., through stool and microscopic testing), we can mitigate this risk significantly.

 

Competitors

There are several types of competitors we will have to keep in mind as we roll out Dr. Loo. The first is the existing medical laboratory services industry. This industry is relatively concentrated, with the top two companies accounting for about 30% of the overall industry, and further consolidation is expected in the coming years. There have also been a handful of competing efforts from other companies, such as Toto, to make experimental smart toilets, but none of these have reached the mass market, because they were expensive and designed as entire toilets rather than add-on sensors. Lastly, there are startups, such as Scanadu and S-There, that are trying to develop technological solutions for analyzing someone’s health from their urine or feces; however, these are still in their growth phase and do not have the robustness or ease of use that our product can deliver.

 

Team

Siddhant Dube

Eileen Feng

Nathan Stornetta

Tiffany Ho

Christina Xiong

Sources

https://stanmed.stanford.edu/2016fall/the-future-of-health-care-diagnostics.html

http://homeklondike.site/2017/04/22/duravit-launches-a-toilet-that-analyzes-urine-tests-by-itself/

http://www.iotevolutionworld.com/smart-home/articles/434971-iot-will-influence-bathroom-the-future.htm

http://www.diasource-diagnostics.com/var/ftp_diasource/IFO/RAPU02C022.pdf

https://www.livestrong.com/article/165210-what-are-the-causes-of-wbc-rbc-in-urine/

https://drannacabeca.com/products/dr-anna-cabeca-keto-alkaline-weight-loss-solution-urinalysis-test-strips

http://medicaldevicedaily.com/perspectives/2017/05/18/new-ce-mark-rules-make-fda-seem-user-friendly-cardiovascular-devices/

https://gizmodo.com/5119681/totos-intelligence-toilet-ii-smartly-measures-the-temperature-of-your-pee-among-other-things

https://www.pcf.org/news/new-urine-test-for-prostate-cancer-available-unlike-psa-test-is-ultra-specific-for-prostate-cancer/

https://www.scanadu.com/diagnostics.html

https://www.fda.gov/MedicalDevices/DeviceRegulationandGuidance/Overview/default.htm

https://aws.amazon.com/activate/

http://howmuchtomakeanapp.com/estimates/results

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4935544/

https://www.ncbi.nlm.nih.gov/pubmed/22235952

http://clients1.ibisworld.com.proxy.uchicago.edu/reports/us/industry/keystatistics.aspx?entid=1408

http://www.diabetes.org/advocacy/news-events/cost-of-diabetes.html

http://www.diabetes.org/assets/pdfs/basics/cdc-statistics-report-2017.pdf

https://www.pbs.org/newshour/health/urine-screens-cost-8-5-billion-a-year-more-than-the-entire-epa-budget

https://www.wired.com/2015/11/c2sense/

https://www.walgreens.com/topic/faq/questionandanswer.jsp?questionTierId=700020&faqId=1200046


https://www.healthline.com/health/urine-ph#results

https://www.gdx.net/product/one-fmv-nutritional-test-urine

http://drwebbhealth.com/urinalysis.html

https://emedicine.medscape.com/article/2090711-overview

NLG for Healthcare Billing

 

Opportunity: healthcare bills are positively inscrutable.

 

Medical billing in the healthcare space is widely known to be overly complicated. Both the end consumer and the service provider must endure a painful process to properly consummate the purchase (and delivery) of healthcare services. For the consumer, the billing and coding of the medical services they received is jargon-laden and nearly impossible to understand. On the service provider side, code accuracy (entered by the practitioner’s administrator) is more often than not a key driver of improper payments.

 

An inscrutable medical bill… $165 for what?! (Table 1)

 

The problem of healthcare bill non-payment is massive. According to Reuters, “U.S. hospitals had nearly $36 billion in uncompensated care costs in 2015, according to the industry’s largest trade group, a figure that is largely made up of unpaid patient bills.” And, “the largest publicly-traded hospital chain, HCA Holdings Inc, reported in the fourth quarter of 2016 that its ratio of bad debt to gross revenues of more than $11 billion was 7.5 percent.”

 

The broader medical billing outsourcing market is projected to reach $16.9 billion by 2024. According to the Centers for Medicare & Medicaid Services (CMS), errors resulted in $36.21 billion in improper payments in FY2017. The cost to both parties is not only frustration, but also a negative patient experience that strains the long-term care relationship between the patient and the healthcare services provider.

 

Solution: NLG to produce patient-friendly bills

 

We can improve the accuracy and efficiency of the healthcare billing and coding process, for both patients and service providers, by leveraging both natural language processing (NLP) and natural language generation (NLG) technology.

For service providers, we will leverage NLP to deliver automated medical coding. We can train our algorithm on large datasets of medical terminology and automate the coding process by analyzing physician documentation from the text of clinical records and using this information to automatically identify the correct billing codes.
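As a sketch of the coding pipeline (not our production model), the example below maps clinical note text to billing codes with TF-IDF features and a linear classifier; the notes and code labels are invented training rows.

```python
# Illustrative sketch of automated coding: TF-IDF text features plus a
# linear classifier that predicts the billing code. A real system would
# train on licensed, professionally coded records.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

notes = [
    "patient presents with sore throat and fever, rapid strep positive",
    "routine annual physical exam, no complaints",
    "laceration of left index finger repaired with sutures",
    "strep throat confirmed, antibiotics prescribed",
    "annual wellness visit, labs ordered",
    "finger wound cleaned and sutured",
]
codes = ["J02.0", "Z00.00", "S61.218A", "J02.0", "Z00.00", "S61.218A"]

coder = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
coder.fit(notes, codes)
print(coder.predict(["fever and very sore throat, strep test positive"]))
# -> ['J02.0']
```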

For consumers, we will leverage NLG to clarify billing for patients. In practice, the NLG technology would turn the same billing codes described in the NLP strategy above into natural language, with clear and concise explanations for patients about their charges and diagnosis. By making bills more transparent and comprehensible, we not only improve the billing process for the patient but also introduce greater trust in healthcare billing: patients feel better about what they are paying for, and service providers gain clarity and efficiency in the billing process.
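On the NLG side, even a simple code-to-plain-language mapping illustrates the output we are after; the code table and charge amounts below are hypothetical, and a full system would generate richer text from the clinical record itself.

```python
# Minimal NLG sketch: map billing codes to plain-language bill lines.
# The code table and charges are hypothetical placeholders.
FRIENDLY = {
    "J02.0": "treatment for a strep throat infection",
    "Z00.00": "your routine annual check-up",
    "S61.218A": "stitches for a cut on your left index finger",
}

def explain_line(code: str, charge: float) -> str:
    desc = FRIENDLY.get(code, "a medical service (ask us for details)")
    return f"${charge:.2f} for {desc}."

for code, charge in [("J02.0", 165.00), ("Z00.00", 240.00)]:
    print(explain_line(code, charge))
# $165.00 for treatment for a strep throat infection.
# $240.00 for your routine annual check-up.
```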

MVP Development: develop and iterate in the field

 

To develop and test an initial minimum viable product, we must first partner with a healthcare services company. We have spoken to a family friend who operates three outpatient centers in the Southern California area, and he is excited about the opportunity to test this concept at his locations.

 

Our partner has been operating the outpatient centers as a family-owned business for thirty years. From our initial diligence, it is clear that the company’s data trove is large, and that nearly all of it exists on physical paper. Prior to building our own technology, we will deploy off-the-shelf software, purchased under license, to prove whether we can generate the value we believe our concept is capable of producing. A number of companies offer both NLP text extraction and understanding applications and NLG text generation applications, including 3M, A2iA, EMscribe, and Popul8.

 

We think that deploying these technologies will help us better understand where they are most effective and where they break down. This knowledge will drive the development of our technology and its application to our partner’s outpatient locations.

 

Commercial Viability

 

The recent shift in the way that healthcare services companies are measured has placed a spotlight on quality of service and has driven these companies to focus on measured impact, such as throughput. As a result, there has been a steady decline in inpatient admissions (and inpatient days) at community hospitals and a simultaneous increase in outpatient visits (more than 600 million) and visits per thousand persons (more than 2,000). Inpatient and outpatient care currently account for roughly 35% and 65% of the market, respectively.

 

These market trends signal an important opportunity for our concept. Higher turnover within healthcare services centers further strains the billing system, increasing negative patient experiences and putting downward pressure on billing efficiency for services companies. The need for our product could not be greater.

 

In terms of competition, as previously stated, a number of companies are building similar products for the largest healthcare services institutions in the U.S. These large institutions not only have larger budgets, but they also attract an extremely large patient base and want the security of a larger technology provider. Our opportunity is in the long tail, where we will target small healthcare services companies with a software solution that is well within their scope of service. By building volume regionally, we will both amass scale and become an attractive M&A target for larger-scale technology solution providers.

 

Appendix:

https://www.beckersasc.com/asc-coding-billing-and-collections/medical-billing-outsourcing-market-to-total-16-9b-by-2024-6-highlights.html

https://www.techemergence.com/artificial-intelligence-medical-billing-coding/

http://www.a2ialab.com/doku.php?id=rimes_database:start

https://automatedinsights.com/blog/natural-language-generation-101

https://www.aha.org/system/files/research/reports/tw/chartbook/2016/2016chartbook.pdf

https://www.reuters.com/article/us-usa-healthcare-hospital-payments/ballooning-bills-more-u-s-hospitals-pushing-patients-to-pay-before-care-idUSKBN17F1CM

Amper: AI Music Composer

 

Opportunity

Similar to other creative arts like television and film, music is at a very early stage of incorporating AI / machine learning-created content. Current AI / ML applications have been primarily limited to streaming services, where companies like Spotify will, among other techniques, use AI / ML to analyze actual song content to find related music. One of the primary barriers to entry for companies looking to create content through AI / ML is that such content may lack the appropriate “star power” (i.e., brand) to ensure widespread adoption: mainstream artists will likely not be the ones to source material from AI / ML, nor can AI / ML-composed music stand on its own outside of a few select genres.


However, when looked at as a collaborative tool, AI / ML composition could provide a royalty-free source of creative input for artists, yielding new music in a crowded marketplace. Moreover, certain genres will be very conducive to AI / ML-created material, such as electronic dance music (EDM). Lastly, AI / ML-sourced music can decrease costs in music applications where a brand is of little or no importance, namely movie soundtracks and advertising music.

 

Solution

Amper Music is a cloud-based, supervised AI / ML music creation tool that allows musicians and composers to develop music through a variety of inputs. More specifically, musicians define the objective function that the creation tool will solve for by specifying parameters like mood, style, instrumentation, tempo, and song length. The software will produce content through AI / ML, at which point artists can modify the AI / ML generated content to produce unique, royalty-free content. Amper can also be accessed through existing Adobe software as a downloadable panel, further reducing artists’ switching costs.
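To make the input-parameters-to-output idea concrete, here is a heavily simplified sketch (our own illustration, not Amper’s actual algorithm): reader-style parameters constrain a random walk over a mood-appropriate scale.

```python
# Simplified parameter-driven composition sketch (illustrative only).
import random

SCALES = {"happy": ["C", "D", "E", "G", "A"],       # major pentatonic
          "somber": ["A", "C", "D", "E", "G"]}      # minor pentatonic

def compose(mood: str, tempo_bpm: int, seconds: int, seed: int = 0) -> list:
    rng = random.Random(seed)
    scale = SCALES[mood]
    n_notes = int(seconds * tempo_bpm / 60)   # one note per beat
    melody, idx = [], 0
    for _ in range(n_notes):
        idx = max(0, min(len(scale) - 1, idx + rng.choice([-1, 0, 1])))
        melody.append(scale[idx])             # stepwise motion sounds musical
    return melody

print(compose(mood="happy", tempo_bpm=120, seconds=5))
```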

 

Commercial Promise

The global music industry is clearly a large market, totaling nearly $50B in revenue in 2017 and projected to grow to almost $60B by 2022. Amper has successfully navigated several stages of VC funding and, as of March 2018, has a valuation of $30M, so investors see a promising product. It is clear that there is a market for the type of music Amper is generating: developing new music for a project in the traditional manner, or even licensing existing music, is far costlier than what Amper offers. Generating music for a project with Amper is simple, easily customized to suit the content, and very low cost.

 

The primary threat to Amper is the relatively low financial barrier to entry. If the market grows large enough to garner major interest from the business world, capital requirements alone would not keep competitors out. That said, an unknown amount of time is required to refine ML algorithms suitable for content creation, which may raise the effective barrier to entry. Many researchers and even other companies have experimented with this type of music generation, with varying degrees of success; in fact, a company called Jukedeck tried this very business model several years earlier and has made little progress. One clear advantage Amper has is a point of entry into a creative marketplace: Amper has developed a “panel” that Adobe Premiere users can download to easily incorporate Amper into their projects.


Alterations

While AI / ML-created content has nearly universal applicability within the music industry in terms of output genre, in the early stages it may prove more beneficial for Amper to specialize in a given application or genre, even if it makes its software available to other applications. Amper could specialize in genres like EDM, where artists are the most likely to adopt Amper Music’s tool as a creative partner, which would allow Amper to further refine its creative algorithms and gain traction with notable EDM artists. At that point, Amper could use the success from this venture as a launch point for new genres. While simultaneously building credibility within the music industry through EDM, Amper should also prioritize artist-agnostic applications like advertising in order to provide reliable sources of revenue. Once sufficiently established within the music industry, Amper could round out its offerings as a complete music collaboration tool by partnering with Narrative Science or another natural language processor to develop lyrical content to pair with the audio.

 

Sources

Amper Music

https://www.ampermusic.com

 

Musical Artificial Intelligence – 6 Applications of AI for Audio

https://www.techemergence.com/musical-artificial-intelligence-6-applications-of-ai-for-audio/

 

Global Music Industry Revenue 2012-2022

https://www.statista.com/statistics/259979/global-music-industry-revenue/

 

Audio Synthesis at Jukedeck

https://research.jukedeck.com/audio-synthesis-at-jukedeck-788239d65494

 

Jukedeck Financial Statements

https://beta.companieshouse.gov.uk/company/07953149/filing-history

 

How Does Spotify Know You So Well?

https://medium.com/s/story/spotifys-discover-weekly-how-machine-learning-finds-your-new-music-19a41ab76efe

 

Pitchbook: Amper Music

https://my.pitchbook.com/profile/122486-41/company/profile#timeline

 

Pitchbook: Jukedeck

https://my.pitchbook.com/profile/60158-44/company/profile#timeline

 

Team Members

Thomas DeSouza, Matthew Nadherny, Patrick Rice, Samuel Spletzer