@yeehaa123
Last active December 18, 2015 01:18

Could you make your end product, the interfaces, considerably more concrete for us? At the moment we do not yet have a clear picture of what to imagine.

Central to this project is the inherent paradox that the physical collection of the Rijksmuseum is marked by: many of its masterpieces can only be fully experienced through gestures that are forbidden in a traditional museum context. Digital media offer a way out of this paradox by setting up a virtual space where these forbidden gestures are in fact allowed. More often than not, however, the resulting digital environments remain detached from the physical exhibition. They offer an autonomous experience that is best enjoyed in the comfort of a visitor's home rather than the museum itself.

In this project, we want to develop interfaces for the Rijksmuseum that connect the digital to the physical collection by enabling several (and ideally all) of the following actions:

Digging

The visitor should go beyond the displayed collection and enter the archives.

Because this first gesture seems so obvious, it is easy to overlook both its importance and its complexity. The total collection of the Rijksmuseum consists of 1,000,000 pieces, while only 8,000 of these works are on public display. Simply digitizing the archive, however, is not enough to make a larger number of artworks available to the public; interfaces are needed.

For that reason, the Rijksmuseum has developed an API (Application Programming Interface) that currently makes 130,000 images and over 1,000,000 data entries publicly available. Unfortunately, most users are not equipped to access the collection programmatically. To compensate for this, the Rijksmuseum has collaborated with Kiss the Frog, Fabrique, and Q42 to create award-winning user interfaces (Rijksstudio, the mobile app). In this project, we want to develop additional interfaces, or features for existing ones, to make the otherwise hidden collection available.
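As a minimal sketch of what programmatic access looks like, assuming the public collection-search endpoint and its documented query parameters (`key`, `q`, `ps`); the `demo_key` value and the helper name are placeholders, not part of the API:

```python
import urllib.parse

# English-language collection endpoint of the Rijksmuseum API.
BASE_URL = "https://www.rijksmuseum.nl/api/en/collection"

def build_search_url(api_key: str, query: str, page_size: int = 10) -> str:
    """Build a collection-search URL; parameter names follow the public API."""
    params = {"key": api_key, "q": query, "ps": page_size, "format": "json"}
    return BASE_URL + "?" + urllib.parse.urlencode(params)

# Example: search the hidden collection for works matching "Vermeer".
url = build_search_url("demo_key", "Vermeer")
print(url)
```

Fetching this URL with an actual API key returns a JSON list of matching artworks, each carrying an inventory number and image links.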

Dissecting

The visitor should tear the masterpieces apart, peel off their layers, and destroy their aura.

In a traditional museum context, a masterpiece is presented as a completed product rather than a fragmented process. However, it is often its details and its (past and future) trajectory that define a masterpiece. For that reason, it is important to show the visitor the different pieces and layers that an artwork consists of.

Moreover, the aforementioned API enables a new way of dissecting artworks: apart from layers and pieces, they can also be dissected into bits. Data can be used not only to reconstruct the original artworks, but also to place them in different contexts and narratives, such as:

  • a history of its reception
  • similarities and differences with other works of the same artist or contemporaries
  • the spatial and conceptual itinerary of an artwork
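The narrative cuts listed above can be sketched over plain metadata records; the toy records and field names below are illustrative and do not reflect the API's actual schema:

```python
from collections import defaultdict

# Toy metadata records standing in for API results; values are illustrative.
records = [
    {"title": "The Milkmaid", "artist": "Johannes Vermeer", "year": 1658},
    {"title": "Woman Reading", "artist": "Johannes Vermeer", "year": 1663},
    {"title": "The Jewish Bride", "artist": "Rembrandt van Rijn", "year": 1665},
]

def works_by_artist(items):
    """Group records by artist: one simple narrative cut through the data."""
    groups = defaultdict(list)
    for item in items:
        groups[item["artist"]].append(item["title"])
    return dict(groups)

def contemporaries(items, year, window=10):
    """Works made within `window` years of `year`: another contextual slice."""
    return [i["title"] for i in items if abs(i["year"] - year) <= window]
```

The same grouping logic, applied to the real metadata, would let an interface juxtapose a masterpiece with works by the same artist or by contemporaries.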

Shailoh Phillips' earlier, award-winning project [Go van Gogh](http://www.govangogh.org) can be seen as a prototype for the approach and interfaces proposed in this project.

Rijksstudio and the mobile app already present some of these dimensions of the artworks to the user. They allow users to zoom in on high-resolution images, peel off the surface to discover hidden earlier versions or completely different paintings, and search for stylistic and affective metadata (color, mood, etc.). Today, however, these interfaces are not fully integrated into the museum visitor's experience.
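A search over stylistic metadata such as dominant color can be sketched as a nearest-color lookup; the palette values and artwork pairings below are invented for illustration, not taken from Rijksstudio's data:

```python
def hex_to_rgb(hex_color):
    """Convert a '#rrggbb' string to an (r, g, b) tuple of ints."""
    hex_color = hex_color.lstrip("#")
    return tuple(int(hex_color[i:i + 2], 16) for i in (0, 2, 4))

def color_distance(a, b):
    """Squared Euclidean distance between two hex colors in RGB space."""
    return sum((x - y) ** 2 for x, y in zip(hex_to_rgb(a), hex_to_rgb(b)))

# Toy dominant-color metadata per artwork; values are illustrative.
palette = {
    "The Milkmaid": "#2a4d8f",    # a blue, invented for this sketch
    "The Night Watch": "#3b2a1a", # a brown, invented for this sketch
}

def closest_by_color(target_hex):
    """Return the artwork whose dominant color is nearest the query color."""
    return min(palette, key=lambda t: color_distance(palette[t], target_hex))
```

Searching for a pure blue would surface the bluest work in this toy palette; a production interface would run the same idea over per-artwork color histograms.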

In this project, we will develop interfaces that not only reveal the small details and hidden layers of the masterpieces, but also integrate them into the physical collection. On the output side, this means the following:

  • Using both small and large screens to mediate and augment the masterpieces.

On the input side, we are thinking of the following interventions:

  • Using body sensors to determine a user's affective response to individual artworks or the collection as a whole.
  • Using light, temperature, and sonic sensors to determine the ambient conditions in which an artwork is experienced.

Together, these two interventions make it possible to bypass some forms of metadata altogether.
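A minimal sketch of how raw body-sensor readings might be collapsed into affective metadata without manual tagging; the thresholds and labels are placeholder assumptions, not calibrated values:

```python
def affective_label(heart_rate_bpm: float, dwell_seconds: float) -> str:
    """Map body-sensor readings for one visitor at one artwork to a coarse
    affective tag. All thresholds are illustrative, not calibrated.
    """
    if dwell_seconds < 10:
        return "passing"        # barely stopped in front of the work
    if heart_rate_bpm > 100:
        return "excited"        # elevated heart rate during a long dwell
    return "contemplative"      # long dwell at a resting heart rate
```

Aggregated over many visitors, such derived tags could stand in for the mood metadata that is otherwise curated by hand.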

Layering

We want the visitor to add layers to the artwork that add context or content.
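One way to model such visitor-added layers is a simple schema like the following sketch; the class and field names are assumptions for illustration, not an existing data model:

```python
from dataclasses import dataclass, field

@dataclass
class Layer:
    """One visitor-contributed layer on top of an artwork."""
    author: str
    kind: str       # e.g. "comment", "image-overlay", "link"
    content: str

@dataclass
class AugmentedArtwork:
    object_number: str             # museum inventory id, e.g. "SK-C-5"
    layers: list = field(default_factory=list)

    def add_layer(self, layer: Layer) -> None:
        self.layers.append(layer)
```

Keeping each contribution as a discrete layer lets the interface show or hide visitor content independently of the original work.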

Dissemination

We want the visitor to take the (augmented) masterpieces home or bring them elsewhere.

The interfaces that we want to develop aim to transform the visitor from a consumer into a researcher. One of the most important aspects of doing research, however, is sharing the results and discussing them with others. For that reason, the Rijksmuseum already publishes all of the data available via its API under a Creative Commons license.

The legal dimension of dissemination, however, is only part of the story. To further stimulate knowledge production, we want to ensure that the output data produced by our interfaces can be used as input for further research. This means that in the future the API should cover not only the 'original data' but also the new data added by the visitor/researcher.
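A sketch of how visitor-produced data might be shaped for re-ingestion alongside the original records; the field names are assumptions for illustration, not a published schema:

```python
import json

def annotation_payload(object_number: str, author: str, text: str) -> str:
    """Serialize one visitor annotation so it could later be served back
    through the API next to the museum's own records."""
    return json.dumps({
        "objectNumber": object_number,   # links back to the original artwork
        "type": "visitor-annotation",
        "author": author,
        "text": text,
        "license": "CC-BY",              # mirrors the museum's open-data policy
    })
```

Attaching an explicit license field to every contribution keeps the visitor-generated layer as reusable as the museum's own open data.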

NOTES

  • an interactive web application for retrieving location-specific information about the arrangement in the museum, drawing on the available digital database of the entire collection
  • data visualizations of works that provide context for the physical objects
  • testing and mapping different kinds of augmentation: context, reference, comparison with other works, invisible layers, zooming in, presenting contemporaries
  • drawing up user scenarios for the interface: who uses it, and for what? What are their wishes/requirements?

Could you explain in more detail what the individual contributions of the named partners will be (Rijksmuseum, Kiss the Frog, Q42, Fabrique)?
