The Terminators: Amazon Go Profile

With the revolutionary Amazon Go concept, Amazon flipped the traditional grocery store model on its head, enabling a shopping experience without check-out lines. Amazon identified a consumer need to simplify the often frustrating shopping experience by eliminating arguably its most annoying part: waiting in line. At Amazon Go, consumers simply choose which items they want and walk out of the store. To deliver this solution, Amazon leveraged its immense trove of consumer data and combined it with innovative computer vision and deep machine learning. However, with the grocery industry’s low margins of 1%-3%, Amazon certainly isn’t targeting this sector for profitability alone. Amazon Go is part of a larger organizational strategy to grow its payments business, expand into physical retail, and ultimately drive more traffic to the larger Amazon.com marketplace.

The science behind Amazon Go involves a combination of sensors and technologies, including (i) computer vision to see what people are looking at and what they’re picking up, (ii) sensors on the shelves, (iii) a mobile app that identifies individuals and ties them to their Amazon accounts, (iv) QR codes scanned upon entering to track identity, and (v) advanced RFID technology to verify when items have been picked up off of a shelf. After a customer scans her smartphone into an Amazon Go-enabled supermarket, every step she takes, and every item she picks up, puts back, or purchases, will generate data. By learning customers’ purchasing habits, stores will be optimized to stock the most relevant products, offer relevant discounts, and even notify customers via Amazon Echo when their milk is expiring. Eventually, all of these features will come together as an augmented, individualized, and seamless experience. While Amazon Go involves multiple sensors, layers, and technologies coordinating and communicating with one another, all of this remains behind the scenes for the consumer. To see the magic for yourselves, watch Amazon’s demo video here. This technology could amount to an Internet of Food. Since Amazon owns the overall ecosystem, from Amazon.com to Amazon Echo, and from Amazon Payments to physical stores, Amazon Go provides Amazon with a positive commercial outlook.
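To make the sensor-fusion idea concrete, the core bookkeeping can be sketched as a stream of shelf events folded into a per-customer virtual cart. This is a minimal illustration, not Amazon's actual architecture: the event names, fields, and matching logic here are all hypothetical.

```python
from collections import defaultdict

# Hypothetical event model: each shelf sensor emits a "pick" or "putback"
# event tagged with the customer that computer vision has identified.
def build_virtual_carts(events):
    """Fold a stream of (customer_id, action, item) events into carts."""
    carts = defaultdict(lambda: defaultdict(int))
    for customer_id, action, item in events:
        if action == "pick":
            carts[customer_id][item] += 1
        elif action == "putback" and carts[customer_id][item] > 0:
            carts[customer_id][item] -= 1
    # Drop zero-count items so the final cart reflects what leaves the store
    return {c: {i: n for i, n in items.items() if n > 0}
            for c, items in carts.items()}

events = [
    ("alice", "pick", "milk"),
    ("alice", "pick", "bread"),
    ("alice", "putback", "bread"),
]
print(build_virtual_carts(events))  # {'alice': {'milk': 1}}
```

The interesting engineering problem, of course, is everything this sketch hides: attributing each pick to the right person when shoppers stand shoulder to shoulder, which is where the computer vision and deep learning come in.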

While Amazon may be the first to implement these technologies in the real world, it was not the first to imagine a future in which technology could enable this type of experience. IBM predicted a jarringly similar concept over ten years ago (watch here), from which Amazon likely drew inspiration for this final store concept. Going beyond the cashier-less checkout process, there are also several retail tech companies that recognize the larger trend and importance of tracking consumer data in-store. RetailNext, Euclid, Brickstream, Nomi, WirelessWerx, Mexia Interactive, and ShopperTrak are just a handful of services that provide brick-and-mortar stores with analytics akin to website traffic reports. By tracking movement within stores, they help retailers better understand how to optimize their layouts, staff their registers, attract returning customers, and more. However, because Amazon was first to market in bringing an experience of this level to life, it has earned itself a significant first-mover advantage and has created a strong competitive advantage by incorporating the technology into its larger ecosystem.

Despite its strong strategic positioning, Amazon Go faces several challenges, including (i) the reluctance of customers to be tied to the Amazon mobile app and Amazon Payments, (ii) the need to address frequent misplacement of items by grocery shoppers, and (iii) the difficulty of handling items priced by weight. Potential ways that Amazon may be able to address these issues include: (i) licensing the underlying technology to non-Amazon grocers in exchange for a licensing fee and acceptance of Amazon Payments at their stores, (ii) programming sensors to alert employees when items are misplaced (perhaps with a subtle colored light that shines underneath the item), and (iii) building food scales into shelves that hold fresh produce, with digital price tags that auto-adjust based on weight.

All in all, Amazon Go is a huge step forward in innovation. We are excited by the technological advances and possibilities that are still to come as Amazon perfects its Amazon Go concept, and as other players within the larger tech space find ways to apply the underlying technology behind Amazon Go towards innovations in other fields.

By The Terminators (Maayan Aharon, Aanchal Bindal, Aditya Bindal, Youngeun Kim, Eran Lewis, Angela Lin)

Profile: Exascale Computing

Supercomputers are extremely fast and powerful computers with advanced processing capabilities. Current supercomputers operate at a scale of a quadrillion (10 to the 15th), meaning the computer performs 10,000,000,000,000,000 arithmetic operations on real numbers per second*. To put this in context, this is more than one hundred thousand times the speed of your laptop.
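These powers of ten can be made concrete with a back-of-the-envelope calculation. The laptop figure below is an assumed round number (roughly 10 billion operations per second of sustained throughput), not a measured benchmark:

```python
petascale = 10**15   # ops/second of a current petascale supercomputer
laptop = 10**10      # assumed sustained throughput of a laptop (rough figure)
exascale = 10**18    # target of the Exascale Computing Project

# How many laptops' worth of compute is one petascale machine?
print(f"supercomputer vs laptop: {petascale // laptop:,}x")    # 100,000x
# And the jump the ECP is aiming for:
print(f"exascale vs petascale:   {exascale // petascale:,}x")  # 1,000x
```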

With this power, supercomputers are used extensively in the advancement of modern science, modeling and simulating complex problems. This field is known as High Performance Computing (HPC), and its applications are wide-ranging, from medical research to the demystification of elements generated by the big bang to fluid dynamics.

Supercomputers run on silicon chips. For decades, innovation meant these chips kept getting faster and faster, and supercomputers kept getting faster with them. Recently, however, the technology in these silicon chips has plateaued and innovation has been minimal*. Additionally, the heat generated and the related high energy costs are major limiting factors in supercomputer innovation. These computers run so hot that advanced cooling techniques are required, and hosting a supercomputer demands a steady power supply, as an outage will cause the processor to melt.

A joint program called the “Exascale Computing Project” (ECP), led by the Department of Energy (DOE), was launched earlier this year to develop the next generation of exascale supercomputers, which will bring processing up to a quintillion (10 to the 18th) calculations per second. The ECP is a 7-year project estimated to cost between $3.5B-$5.7B and is a joint effort between several US governmental agencies, including the Department of Defense, NASA, the FBI, the National Science Foundation, and six National Laboratories, such as the local Argonne National Laboratory. Recent innovations in technology, particularly in cell phone tech and graphics cards (which can process a lot of data at a much lower heat signature than a traditional silicon chip), opened up the potential for exascale computing. In addition to advancing the hardware, the ECP also needs to innovate the software used to run these machines and increase their energy efficiency, because the current consumption rate is prohibitively expensive. This increase in processing power will generate more accurate simulations and lead to unprecedented scientific breakthroughs. The ECP identifies six pillar areas of application: national security, energy security, economic security, scientific discovery, healthcare, and earth systems.

Examples include nuclear stockpile stewardship, development of energy-efficient engines run on biofuels, seismic hazard risk assessment, accelerated cancer research, and dark matter studies*. In addition to scientific applications, the development of exascale computing has geopolitical implications. The US has historically been the leader in HPC; however, China has had the fastest supercomputer globally since 2013 and the top two since 2016. Furthermore, China has been outspending the US on exascale development, and international competition continues to intensify. The ECP is in part a response to this and has the explicit goals of keeping the U.S. at the forefront of technological innovation and thus sustaining economic competitiveness.

*Primary Source: Interview with John Bell, Director of Center for Computational Science and Engineering at Lawrence Berkeley National Laboratory

By: Gilad Andorn, Renee Bell, John Brennan, and Lauren Kramer (Group Name: Computers)

The future of music

Anxiety disorders and depression affect nearly 40 million adults in the United States, causing nearly $42 billion in costs annually. $23 billion of that comes from recurring health care services, such as psychological therapy and hospitalization. While these services have been critical in resolving many cases of depression, many cases of anxiety disorder still go undiagnosed or untreated to this day. Several obstacles contribute to this: for uninsured individuals, for example, psychological therapy can be unaffordable or out of reach, leaving their conditions untreated.

Of these 40 million adults, an estimated 10 million are also musicians or played music at some point in their lives. Recent studies have suggested the benefits of playing music in resolving psychological issues such as stress, depression, and anxiety. We are creating a solution that not only identifies latent or untreated cases of anxiety disorder, but also delivers music therapy driven by real-time data.

This is where MUSIPY comes in.

MUSIPY is an online platform that uses health-wearable data to identify indicators of anxiety disorder and prescribes music therapy to help sufferers overcome it. This new, innovative solution relies on sensors from two main vehicles. First, it relies on traditional physical monitors that track neurological activity for indicators of anxiety disorder; for example, consistent irregularities in sleeping patterns can be a leading indicator of conditions such as depression. Many devices on the market today, such as the Apple Watch, Fitbit Flex, and other health wearables, already collect this kind of data. On the MUSIPY platform, this data will be used to identify which neurological symptoms are present. This is only the first step.

We then pair this data with MUSIPY sensors that can easily be embedded into any live instrument. These sensors can not only track in real-time how patients are playing music and gauge what their physiological responses are to playing specific notes, but can also recommend songs for individuals to play that are optimized to evoke certain feelings. This is the latest and greatest in data analytics, delivered through a patented five-step process.

Step 1: Takes real-time data from health wearables and categorizes physiological responses into various conditions.

Step 2: The MUSIPY platform actively uses this data to identify specific conditions in order to treat them.

Step 3: Deploys sensor-equipped musical instruments, through which music therapists can recommend specific songs for patients to play along with.

Step 4: As the patient plays, this data is transmitted to the platform, where therapists can create playlists to provide music therapy.

Step 5: Over time, the patient continues to play different songs and melodies, enabling them to confront their physiological challenges through real-time musical expression.
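The five steps above can be sketched as a minimal feedback loop. Every function, field, and threshold here is hypothetical, standing in as a toy model of the process rather than MUSIPY's actual patented implementation:

```python
def categorize(wearable_sample):
    """Step 1: map a raw wearable reading to a named condition (toy rule)."""
    # Hypothetical threshold: a resting heart rate above 100 bpm flags anxiety
    return "anxiety" if wearable_sample["heart_rate"] > 100 else "baseline"

def recommend_song(condition):
    """Steps 2-3: prescribe a song intended to evoke a calming response."""
    playlist = {"anxiety": "slow tempo in C major", "baseline": "free play"}
    return playlist[condition]

def feedback_loop(samples):
    """Steps 4-5: as the patient plays, re-read the wearable and adapt."""
    return [recommend_song(categorize(s)) for s in samples]

# A toy session: the patient starts anxious, then settles as they play
session = [{"heart_rate": 112}, {"heart_rate": 95}]
print(feedback_loop(session))  # ['slow tempo in C major', 'free play']
```

The real platform would of course replace the single heart-rate rule with multi-signal models and a therapist in the loop; the point of the sketch is just the shape of the cycle: sense, categorize, prescribe, play, re-sense.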

In conclusion, MUSIPY is the real-time platform that uses data to identify untreated anxiety disorders and prescribes real-time music therapy to help patients overcome their anxiety. We believe this will not only improve outcomes for certain individuals, but also widen access to critical treatments that are currently either unattainable or ineffective.
Shallow Blue – Nikon Profile
Nikon Retinal Imaging – Solution Profile

  • Will Thoreson-Green (Student ID: 12148843)
  • Curt Ginder (Student ID:440345)
  • Holly Tu (Student ID: 12137544)
  • Tom Kozlowski (Student ID: 10452411)
  • Ram Nayak (Student ID: 12131499)

We pledge our honor that we have not violated the Honor Code during this assignment.

1. Outline of the Problem

Diabetic retinopathy is a complication of diabetes that causes damage to the retina. It is the leading cause of blindness among Americans ages 20-64, affecting 40-50% of people with diabetes and accounting for 12% of all new cases of blindness in the United States. Diabetic retinopathy often does not have early warning signs, but the disease can be detected early on by the presence of microaneurysms in the eye. If new cases are detected early, at least 90% could be reduced through proper monitoring and treatment. Diabetic macular edema is a complication of diabetic retinopathy due to fluid buildup, affecting 10% of people with diabetes. Early detection of these two diseases could prevent significant vision loss and blindness.

2. Nature of the Solution

Nikon and Google’s Verily Life Sciences have partnered to develop a machine learning-enabled retinal imaging solution that will allow for earlier detection of diabetic retinopathy and diabetic macular edema. The underlying technology is Nikon’s ultra-widefield, high-resolution digital imaging, which captures approximately 82% of the retina. Verily is working on developing a machine learning algorithm that can then read these images and detect early signs of retinopathy in patients with diabetes. They are most likely using the recently acquired data on over one million eye scans from the National Health Service (NHS) to help build this algorithm.1 Interestingly, Kaggle had launched a competition back in 2015 with this exact objective, and the winning algorithm had a 10% higher “agreement rate” than the human-only approach (indicating that the algorithm and a single human expert agreed on a diagnosis more often than two human experts did).2

3. Evaluation of Effectiveness

According to a peer-reviewed article, the Verily system performed at a high level of sensitivity and specificity (97.5% and 93.4%, respectively) compared to the “gold standard,” as determined by majority decision of a panel of expert ophthalmologists.4 These results were replicated in an entirely separate dataset. While the results of the technology are promising, the chance of competitors replicating the technology is high, given the publicly available datasets of retina images that can be used to train such algorithms. Additionally, there are some outstanding concerns regarding the generalizability of the technology to images outside of the datasets used for training and validation. Further testing in actual clinical settings is a necessary step prior to mass-scale implementation.
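For readers unfamiliar with the two reported metrics, they are defined over the standard confusion-matrix counts, as the short sketch below shows. The counts used here are illustrative round numbers chosen to land near the reported figures, not Verily's actual data:

```python
def sensitivity(tp, fn):
    """True positive rate: fraction of truly diseased eyes the algorithm flags."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """True negative rate: fraction of healthy eyes the algorithm correctly clears."""
    return tn / (tn + fp)

# Illustrative counts only (not the study's data)
tp, fn = 975, 25   # diseased eyes: caught vs missed
tn, fp = 934, 66   # healthy eyes: cleared vs falsely flagged
print(f"sensitivity: {sensitivity(tp, fn):.1%}")  # 97.5%
print(f"specificity: {specificity(tn, fp):.1%}")  # 93.4%
```

High sensitivity matters most for a screening tool (few missed cases), while high specificity keeps referral volume manageable, which is the trade-off the cited study reports.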

4. Proposed Alterations

The addition of other physiologic and early pathologic variables might improve the accuracy of the solution. For example, laboratory values indicating other end-organ damage (microalbuminuria, creatine kinase, HbA1c) could augment the visual findings from the retinal imaging.

Additionally, the visual ophthalmologic findings could be applied to early detection of other diseases, such as hypertension, that are not typically evaluated through retinal imaging. Augmented with a medical professional’s judgement of other risk factors, this combined approach might improve targeted preventative treatment for a variety of diseases that are otherwise difficult to predict.

The application of the algorithm in an appropriate clinical context could ensure an improvement in the detection and treatment of diabetic retinopathy, especially in resource constrained settings. Currently, the American Diabetes Association recommends patients with diabetes have an annual examination by an ophthalmologist.5 For areas without access to ophthalmologists, this solution could improve the diagnostic capabilities of a mid-level or general practice provider that lacks specialized training.

 

Sources:

  1. Hodsden, Suzanne. 2017. “Nikon, Verily Partnership Combines Machine-Learning With Advanced Retinal Imaging.” Med Device Online. https://www.meddeviceonline.com/doc/nikon-verily-partnership-combines-machine-learning-with-advanced-retinal-imaging-0001
  2. Farr, Christina. 2015. “This Robo Eye Doctor May Help Patients With Diabetes Keep Sight.” KQED Science. https://ww2.kqed.org/futureofyou/2015/08/20/this-robo-eye-doctor-may-help-patients-with-diabetes-keep-sight/
  3. All figures taken from https://en.wikipedia.org/wiki/Diabetic_retinopathy (all scholarly articles cited in wikipedia) and https://news.fastcompany.com/verily-and-nikon-will-develop-machine-learning-tools-to-screen-for-vision-loss-4027884
  4. Varun Gulshan, PhD; Lily Peng, MD, PhD; Marc Coram, PhD; et al. Development and Validation of a Deep Learning Algorithm for Detection of Diabetic Retinopathy in Retinal Fundus Photographs. Journal of the American Medical Association. http://jamanetwork.com/journals/jama/article-abstract/2588762
  5. David K McCulloch, MD, et al. Diabetic retinopathy: Screening. UpToDate. https://www.uptodate.com/contents/diabetic-retinopathy-screening