The Next Rembrandt – Recreating the Great Master

The Challenge: Can We Bring a Master Artist Back to Life?

Rembrandt van Rijn (1606-1669) was one of the greatest visual artists in history, and certainly the most important Dutch artist. His empathy for the human condition set him apart: his work focused predominantly on portraiture and the spiritual, and he surpassed the artists who came before him in his ability to capture his subjects' emotions through subtle facial cues.

Dutch bank ING had a simple but powerful question: could the great master be brought back to life to create a new painting? Enlisting partners such as Microsoft and TU Delft, ING funded a team of machine-learning scientists, software engineers, and art historians to attempt the impossible.

A Masterpiece is Born: The Next Rembrandt Conceived and Executed

The team began with a huge set of raw data: 150GB spanning 346 Rembrandt paintings. Their first pass used a deep-learning algorithm to upscale some of the images, maximizing resolution and quality. Next, they applied a separate algorithm to identify Rembrandt's most common subjects, partitioned by factors such as age, gender, and even the direction the head faces. Based on that analysis, the researchers determined that the final painting should be a portrait of a Caucasian male with facial hair, 30 to 40 years old, wearing black clothes with a white collar and a hat, facing to the right.
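As a rough illustration of this subject analysis, the sketch below simply counts attribute combinations across catalogued subjects and reports the most common profile. The attribute names, the example records, and the idea of taking the single most frequent combination are illustrative assumptions, not the project's actual algorithm or data.

```python
# Minimal sketch: find the most common subject profile in (hypothetical) painting metadata.
from collections import Counter

# Hypothetical metadata records standing in for the 346 digitized paintings.
subjects = [
    {"gender": "male", "age_band": "30-40", "facial_hair": True,
     "attire": "black with white collar", "facing": "right"},
    {"gender": "male", "age_band": "30-40", "facial_hair": True,
     "attire": "black with white collar", "facing": "right"},
    {"gender": "female", "age_band": "20-30", "facial_hair": False,
     "attire": "dark gown", "facing": "left"},
    # ... one record per catalogued subject
]

# Count how often each full attribute combination occurs and take the mode.
profile_counts = Counter(tuple(sorted(s.items())) for s in subjects)
most_common_profile, count = profile_counts.most_common(1)[0]

print(f"Most representative subject ({count} of {len(subjects)} records):")
for attribute, value in most_common_profile:
    print(f"  {attribute}: {value}")
```

In practice the partitioning would operate on far richer features than these few labels, but the underlying step of summarizing the corpus into a single representative profile is the same.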

To construct the painting itself, the team first used the training set of paintings to generate new facial features: a representative pair of eyes, a nose, a mouth, and so on. Using a facial-recognition algorithm, the team then measured the typical proportions of these features across Rembrandt's other subjects, allowing the individual generated elements to be placed in relation to one another on the face. In this step the researchers also rendered light and shadow, since this "spotlight effect" was a principal element of Rembrandt's work.

Beyond the feature engineering described above, the researchers also moved past the 2D plane. To capture Rembrandt's textures and brushstrokes, the team analyzed a handful of Rembrandt paintings with a 3D scanner, constructing a highly detailed height map of their surfaces. Using a 3D printer, the researchers then printed the final painting in 13 layers of ink, one on top of the other, with the height map determining texture and the earlier 2D models determining form.
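To make the layered printing step concrete, here is a minimal sketch of how a scanned height map could be quantized into 13 stacked print layers. The NumPy implementation, the synthetic height map, and the simple uniform quantization are all assumptions made for illustration; the project's actual texture pipeline was far more involved.

```python
# Minimal sketch: quantize a 2D height map into stacked boolean print-layer masks.
import numpy as np

NUM_LAYERS = 13  # number of stacked ink layers reported for the final print

def height_map_to_layers(height_map: np.ndarray, num_layers: int = NUM_LAYERS):
    """Quantize a 2D height map into boolean masks, one per printed layer."""
    # Normalize heights to [0, 1] so the quantization is independent of units.
    span = height_map.max() - height_map.min()
    h = (height_map - height_map.min()) / (span + 1e-9)
    # Layer k is printed wherever the surface reaches at least k levels,
    # so stacking the masks from the bottom up reproduces the relief.
    levels = np.ceil(h * num_layers).astype(int)
    return [levels >= k for k in range(1, num_layers + 1)]

# Tiny synthetic example standing in for a real brushstroke scan.
demo = np.random.default_rng(0).random((64, 64))
layers = height_map_to_layers(demo)
print(f"{len(layers)} layers; top layer covers {layers[-1].mean():.1%} of pixels")
```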

The Next Rembrandt

Looking to the Future: Modifications and Commercial Applications

While the popular press generally received the painting positively (it won a few advertising awards), focusing on the technological aspects of its creation, reactions overall, especially from experts, were mixed. Some claimed that the features and colors were all wrong, while other critics saw it as an opportunity to learn new things about an artist who has otherwise been so closely studied by traditional methods. Still others were disconcerted by what the advancement might mean for humanity's place in creating art. We see the project as a great opportunity for people to engage more deeply with machines to create masterpieces.

Although the technological piece used only existing algorithms and approaches, the project had clear value in combining them in a unique way to do something that had never been tried before: producing a brand-new painting representative of an existing artist using only (or primarily) machines. As such, we are confident in the approach's commercial value, especially to museums interested in the intersection of technology and art. One can easily imagine a successful tour of a handful of such paintings, with additional works representative of other masters such as Picasso, Monet, and O'Keeffe. Museums and art historians might also use the technology to better explore and understand artists, getting a sense for the characteristics and details that span a collection. These sorts of paintings could even be sold to less wealthy collectors; having a "Rembrandt" in your home might be a fascinating way to show an appreciation of the arts at a fraction of the cost.

Nonetheless, we believe the approach can be pushed further in a number of directions. In a coarse sense, we would want to see additional features and more creativity beyond translating existing work. Our principal concerns, though, stem from the enormous manpower still involved: 18 months for the creation of this single painting. The more automation, and the fewer handcrafted aspects of the process, the more feasible the technology becomes at commercial scale and across multiple artists. Finally, it remains unclear to what degree such a process could produce multiple unique works from the same artist inputs, an undoubtedly central question. Many of these changes aim to address one of the main critiques of this project: that it was more a human-chosen average painting upgraded by machines than a true computer-generated Rembrandt.



Science Ease – Making Science More Accessible

The Problem: Science Research and Scientific Journals Are Inaccessible

Science Ease is a crowdsourced platform that aims to translate and aggregate scientific research, making it more accessible and more likely to be implemented by practitioners and policymakers alike.

If innovation is the engine that drives economic growth, basic science is the fuel. In the U.S. alone, government agencies spend roughly $130B annually on science funding. However, despite its centrality to the economy and the huge resources invested in it, research too often sits on the shelf because it is inaccessible. The typical layperson, with limited knowledge of research design and jargon, gains little from reading journal articles. Unfortunately, only weak and informal mechanisms exist to turn new knowledge into practical gain.

The problem is especially acute where the public value of the research far exceeds the private, appropriable value. To illustrate: suppose a Booth professor came up with a new auction design that, if implemented by Amazon, would be worth $400M. There is little doubt that somehow, whether via the professor's private consulting firm or media reports, Amazon would manage to find the idea and implement it. In contrast, take an idea like Booth professor Eric Budish's discrete-time fix for high-frequency trading. It has high social returns that are widely distributed, and only a few firms (those specializing in high-frequency trading) would lose from it. Scientific findings like these, where the benefits accrue mostly to people who would never read the research, can be lost in translation.

The Solution: A Crowdsourced Platform for Translating and Sharing Research

Science Ease is designed to alleviate these issues through the power of the crowd. It will serve as a centralized resource through which scientists, innovators, and everyday people interact to translate state-of-the-art scientific findings into language that everyone can understand. It will operate on a non-profit model, funding itself primarily through donations and running at low cost. Instead of pursuing economic profit, the site will aim to raise awareness of cutting-edge scientific findings among average, everyday people.

The platform operates on a Wiki 2.0 model, with editors at the top of the stack responsible for pushing publishable content to the public site, and contributors below them constantly working on the backend to refine the content iteratively. This Editor/Contributor structure is critical both for ensuring that the quality is implementation-ready and for avoiding things like political fights over the minimum wage (as would happen in a Wiki 1.0 model). Moreover, unlike traditional handbooks or meta-research articles, using the crowd allows the tone to step away from the jargon that dominates academia, while also ensuring that the content can be updated in real time rather than on a publisher's schedule.
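To illustrate the Editor/Contributor split, the following sketch models the workflow as a tiny data structure: contributors append draft revisions, and only an editor can push one of those drafts to the public site. The class and field names are hypothetical, not a description of any existing Science Ease codebase.

```python
# Minimal sketch of the Editor/Contributor workflow: drafts iterate on the backend,
# and only an editor-approved revision becomes the public version of an article.
from dataclasses import dataclass, field

@dataclass
class Revision:
    author: str
    text: str

@dataclass
class Article:
    title: str
    drafts: list[Revision] = field(default_factory=list)
    published: Revision | None = None

    def contribute(self, author: str, text: str) -> None:
        """Contributors add or refine drafts; nothing goes public yet."""
        self.drafts.append(Revision(author, text))

    def publish(self, editor: str, draft_index: int) -> None:
        """Only an editor pushes a vetted draft to the public site."""
        self.published = self.drafts[draft_index]
        print(f"{editor} published draft {draft_index} of '{self.title}'")

# Example: contributors refine a summary, then an editor publishes the latest draft.
article = Article("Discrete-time trading in plain language")
article.contribute("contributor_a", "First plain-language summary of the paper.")
article.contribute("contributor_b", "Tightened summary with a worked example.")
article.publish(editor="editor_1", draft_index=1)
```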

To attract contributors, the site will begin by partnering with scientists who are looking to increase the profile of their work (particularly research whose findings would benefit the general public). We would then work with their teams (and potentially hire some initial contributors) to translate that research into easy-to-understand, interesting web pages. Once enough publicly beneficial research is made accessible, we would focus on attracting users through press outreach, partnerships with policymakers, and potentially advertising. Once enough readers and contributors have joined the site, a critical threshold would be reached: the most excited readers would become contributors, quality would improve, and additional readers would be drawn in.

Demonstration: Turn a Publicly Beneficial Discovery into Action

There are two ways we can demonstrate the product. To convince investors or other critical decision makers, we would run a simple experiment: show them sections of text from research articles alongside sections from Science Ease, without identifying which is which. We would then ask them to rate how well they understand each text and how important they believe it is. These results would show just how powerful Science Ease can be, both in making research accessible and in demonstrating why that research matters to everyone.
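As a sketch of how the ratings from that blind comparison could be summarized, the snippet below averages hypothetical comprehension scores for the two kinds of passages and reports the gap. The numbers and labels are invented purely for illustration.

```python
# Minimal sketch: compare average comprehension ratings for the two passage sources.
from statistics import mean

# Hypothetical 1-7 comprehension ratings collected from blinded readers.
ratings = {
    "journal_article": [2, 3, 2, 4, 3],
    "science_ease":    [6, 5, 6, 7, 5],
}

for source, scores in ratings.items():
    print(f"{source}: mean comprehension {mean(scores):.1f} / 7")

gap = mean(ratings["science_ease"]) - mean(ratings["journal_article"])
print(f"Science Ease passages rated {gap:.1f} points easier to understand on average")
```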

To test the broad efficacy of the platform, we will first experiment with a single discrete idea: Budish's high-frequency trading fix, mentioned above. The solution, to be undertaken by a trading exchange such as NASDAQ or BATS, is already outlined in Budish's work. This experiment will therefore test our central theory: that the main barriers to the idea taking hold in reality are the barriers to understanding it, and by extension the current inability of anyone but Budish himself to advocate for it. Over a six-month period, with Budish and two of his colleagues acting as Editors, we will invite Contributors from around the world to build a centralized article summarizing his research and outlining the specific policy and practice steps necessary to implement it. Should our theory hold, the outcome to be measured is whether any trading exchange in fact implements the idea.