Could you make your end product, the interfaces, considerably more concrete for us? At the moment we do not quite have an idea of what we should picture.
Central to this project is the following paradox:
The Rijksmuseum's masterpieces can only be fully experienced through gestures that are forbidden in a traditional museum context.
Digital media offer a way out of this paradox by setting up a virtual space where these forbidden gestures are in fact allowed. More often than not, however, the resulting digital environments remain detached from the physical exhibition. They offer an autonomous experience that is best enjoyed in the comfort of a visitor's home rather than the museum itself.
In this project, we want to develop new interfaces for the Rijksmuseum that connect the digital to the physical collection by facilitating (ideally all of) the following gestures:
The visitor should deviate from the designated paths and open locked doors.
The total collection of the Rijksmuseum consists of 1,000,000 pieces, but only a fraction (8,000) of these works is on public display. To make a larger number of artworks explorable for the public, however, simply digitizing the archive is not enough. Interfaces are needed that not merely give users access to the archives but also provide them with tools for orientation.
It is easy to overlook both the importance and the complexity of making the archives of the Rijksmuseum accessible to the public. A key part of this process is their API (Application Programming Interface). This interface currently makes 130,000 images and over 1,000,000 data entries available to developers, scientists, and a technically skilled public.
Unfortunately, however, most users are not equipped to access the collection programmatically. To compensate for this, the Rijksmuseum has collaborated with Kiss the Frog, Fabrique, and Q42 to create award-winning user interfaces (Rijksstudio, the mobile app). In this project, we want to build further on the API and the available interfaces to develop additional applications (and features) that make the otherwise hidden collection available.
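To make "accessing the collection programmatically" concrete, the following is a minimal sketch of how a developer might query the collection API. The endpoint and the `key`, `q`, and `ps` parameter names follow the public Rijksmuseum API as we understand it, but should be checked against the official documentation; the API key shown is a placeholder.

```python
from urllib.parse import urlencode

# Base endpoint of the Rijksmuseum collection API (English-language variant).
BASE_URL = "https://www.rijksmuseum.nl/api/en/collection"

def collection_search_url(api_key: str, query: str, results_per_page: int = 10) -> str:
    """Build a search URL for the collection API.

    `key` authenticates the request, `q` is a free-text search term,
    and `ps` sets the page size. Parameter names are taken from the
    public API documentation and may change.
    """
    params = {"key": api_key, "q": query, "ps": results_per_page, "format": "json"}
    return f"{BASE_URL}?{urlencode(params)}"

# Example: search for works matching "Vermeer" (placeholder key).
url = collection_search_url("0000-EXAMPLE-KEY", "Vermeer")
print(url)

# Fetching and decoding the JSON response would then look like:
#   import json, urllib.request
#   with urllib.request.urlopen(url) as resp:
#       data = json.load(resp)
#   titles = [art["title"] for art in data["artObjects"]]
```

The point of the sketch is the barrier it illustrates: even this small amount of code already assumes familiarity with URLs, query parameters, and JSON, which is exactly what most visitors lack.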
The visitor should tear the masterpieces apart, peel off their layers, and destroy their aura.
In a traditional museum context a masterpiece is presented as a completed product rather than a fragmented process. However, it is often the small and hidden details and the (past and future) trajectory of a masterpiece that define it. For that reason, it is important to show the visitor the different pieces and layers that an artwork consists of.
Additionally, the aforementioned API enables a new way of dissecting artworks. Apart from layers and pieces, they can now also be dissected into bits. Data can be used both to reconstruct the original artwork and to place it in new contexts and narratives, such as:
- a history of its reception
- the biography of the portrayed people
- similarities and differences with other works by the same artist or their contemporaries
- the spatial and conceptual itinerary of an artwork
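The recontextualizations listed above all amount to regrouping the same underlying metadata along a different narrative axis. A toy sketch of one such axis (works per artist), using hand-made records whose field names are illustrative rather than the API's actual schema:

```python
from collections import defaultdict

# Hypothetical, hand-made metadata records standing in for API output;
# the field names are illustrative, not the API's actual schema.
records = [
    {"title": "The Milkmaid", "artist": "Johannes Vermeer", "year": 1660},
    {"title": "The Love Letter", "artist": "Johannes Vermeer", "year": 1669},
    {"title": "The Night Watch", "artist": "Rembrandt van Rijn", "year": 1642},
]

def group_by_artist(items):
    """Regroup flat records into one narrative axis: works per artist."""
    grouped = defaultdict(list)
    for item in items:
        grouped[item["artist"]].append(item["title"])
    return dict(grouped)

by_artist = group_by_artist(records)
print(by_artist["Johannes Vermeer"])  # → ['The Milkmaid', 'The Love Letter']
```

The same pattern, grouping by date, location, or depicted person instead of artist, would yield the reception histories, biographies, and itineraries listed above.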
Shailoh Phillips' earlier, award-winning project Go van Gogh: A Global Treasure Hunt ([website](http://www.govangogh.org)), in which she traces the complex displacements of Van Gogh reproductions, can be seen as a prototype for such an approach and its interfaces.
Currently, Rijksstudio and the mobile app already present some of these dimensions of the artworks to the user. They allow users to zoom in on high-resolution images, peel off the surface to discover hidden, earlier versions or completely different paintings, and search for stylistic and affective metadata (color, mood, etc.). Nonetheless, these interfaces are not integrated into the physical collection. In this project we will develop interfaces that not only reveal the small details and hidden layers of the masterpieces, but are actually part of the visitor's experience.
On the output side, we want to develop the following:
- large transparent monitors that are placed in front of the actual artworks and provide the visitor with recontextualizing information
- cell phone and tablet applications that provide the user with new ways to interact with the artwork: zooming in, peeling off layers, replacing (parts of) an artwork with sketches, studies, actual photographs, etc.
On the input side, we are thinking of the following interventions:
- Using body sensors to determine a user's affective response to individual artworks or the collection as a whole.
- Using light, temperature, and sonic sensors to determine
Both interventions make it possible to bypass some forms of manually entered metadata altogether.
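One way to picture how sensor input could stand in for manually entered metadata: a toy mapping from a (hypothetical) normalized skin-conductance reading to a coarse affect tag that can be attached to an artwork record. The thresholds and tag names are invented for illustration; a real deployment would calibrate per visitor and per sensor.

```python
def affect_tag(skin_conductance: float) -> str:
    """Map a normalized arousal reading (0.0-1.0) to a coarse affect label.

    The thresholds are arbitrary illustrations, not calibrated values.
    """
    if skin_conductance < 0.3:
        return "calm"
    if skin_conductance < 0.7:
        return "engaged"
    return "moved"

# Tag an artwork (SK-C-5 is the inventory number of The Night Watch)
# with the visitor's measured response instead of curator-supplied
# mood metadata.
annotation = {"object_number": "SK-C-5", "affect": affect_tag(0.82)}
print(annotation)  # → {'object_number': 'SK-C-5', 'affect': 'moved'}
```

This is the sense in which metadata is "bypassed": the mood label is produced by the visitor's body in front of the physical work rather than entered by a cataloguer.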
The visitor should augment the artwork with new layers of information.
The visitor should take the (augmented) artworks home or bring them elsewhere.
The interfaces that we want to develop are aimed at transforming the visitor from a consumer into a researcher. However, one of the most important aspects of doing research is sharing the results and discussing them with others. For that reason, the Rijksmuseum already publishes all of the data available via its API under a Creative Commons license.
The legal dimension of dissemination, however, is only part of the story. In order to further stimulate knowledge production we want to ensure that the output data produced by our interfaces can be used as input for further research. This means that in the future the API should not only cover the 'original data' but also the new data that is added by the visitor/researcher.
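The round trip described above can be sketched as a data format: visitor-produced annotations carry an explicit provenance field, so that later queries can distinguish "original" curatorial data from contributed data. All field names here are assumptions for illustration, not an existing API schema.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class VisitorAnnotation:
    """A visitor-contributed data point, kept distinguishable from
    curatorial data by an explicit `source` field.

    Field names are illustrative, not an existing API schema.
    """
    object_number: str  # the museum's inventory id, e.g. "SK-C-5"
    author: str
    body: str
    source: str = "visitor"
    created_at: str = ""

    def to_json(self) -> str:
        return json.dumps(asdict(self))

note = VisitorAnnotation(
    object_number="SK-C-5",
    author="anonymous-visitor-42",
    body="The dog in the lower right mirrors the drummer's pose.",
    created_at=datetime.now(timezone.utc).isoformat(),
)
print(note.to_json())
```

Because the record serializes to the same JSON the API already speaks, annotations like this could in principle be served back through the same interface as the original data.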
Could you explain in more detail what the individual contributions of the named partners will be (Rijksmuseum, Kiss the Frog, Q42, Fabrique)?