1. Original Goals

In January 1997, after receiving our first Intel Grant, we began work on the Global Visual Music Project with the following goals:

a. To develop software for the creation, mediation, and dissemination of real-time multimedia content, including high-resolution two- and three-dimensional graphics, digital audio and video.

b. To develop a networking capability for this software, so that multimedia data could be shared between users in many locations.

c. To organize a high-profile event to unveil these resources by staging a networked, multiple-site public performance with accomplished artists in established artistic and technological venues.

d. To create a web site to disseminate information about our research.

e. To freely distribute the software we create.

f. To develop and publish a communication protocol for networked distribution of high-quality real-time multimedia data.


2. Progress to Date

a. Software

Miller Puckette has been developing PD, a graphical, object-oriented programming language optimized for real-time audio and graphics applications. Mark Danks has simultaneously been developing GEM, a set of extensions that enable PD to draw on OpenGL for control of two- and three-dimensional graphics. Rand Steiger and Vibeke Sorensen have been working with the alpha versions of PD and GEM: testing them, developing applications and content for future performances, and providing Puckette and Danks with ideas and designs for additional objects and processes.
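All of this work rests on PD's extension mechanism: new objects are written in C against a small API and loaded at run time. As a rough illustration, a minimal object might look like the sketch below. The [scale] object itself is invented for the example, and the API names follow Pd as later publicly documented, which may differ in detail from the current alpha.

    /* Hypothetical PD object: [scale f] multiplies incoming floats by f. */
    #include "m_pd.h"

    static t_class *scale_class;

    typedef struct _scale {
        t_object x_obj;    /* mandatory header for every PD object */
        t_float x_factor;  /* multiplier given as a creation argument */
        t_outlet *x_out;   /* outlet for the scaled result */
    } t_scale;

    /* react to an incoming float: scale it and send it out */
    static void scale_float(t_scale *x, t_floatarg f)
    {
        outlet_float(x->x_out, f * x->x_factor);
    }

    /* constructor: called when the user types [scale 2.5] in a patch */
    static void *scale_new(t_floatarg factor)
    {
        t_scale *x = (t_scale *)pd_new(scale_class);
        x->x_factor = factor;
        x->x_out = outlet_new(&x->x_obj, &s_float);
        return (void *)x;
    }

    /* registration: called once when PD loads the external */
    void scale_setup(void)
    {
        scale_class = class_new(gensym("scale"),
            (t_newmethod)scale_new, 0, sizeof(t_scale),
            CLASS_DEFAULT, A_DEFFLOAT, 0);
        class_addfloat(scale_class, scale_float);
    }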

b. Platform

The software we are developing has the capacity to mix and process multiple sources of audio and video while simultaneously generating high-resolution two- and three-dimensional graphics and high-fidelity audio. It is clear, however, that limitations in processor speed and memory architecture prevent us (at this time) from accomplishing all of our content generation and manipulation in software alone.

Therefore, for the first phase of our project, we have adopted the strategy of using external dedicated digital video and audio processors, controlled via an RS-232 serial interface and dedicated objects in our software. This way the CPUs are free to concentrate on providing an integrated user interface, generating audio and graphics, controlling the external hardware, and performing signal processing that is not possible in the dedicated devices. The CPUs also analyze the live audio and video signals, both for data reduction before transmission over the network and to allow data from one medium to control another (e.g., audio amplitude controlling the color of a texture map on a 3D object).
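To illustrate the serial control path, the sketch below opens a serial port on a POSIX system and sends a command to an external processor. The device path, baud rate, and command bytes are all hypothetical; the actual control code lives inside the dedicated PD objects and depends on each device's command set.

    #include <fcntl.h>
    #include <stdio.h>
    #include <termios.h>
    #include <unistd.h>

    int main(void)
    {
        /* hypothetical device path for the first serial port */
        int fd = open("/dev/ttyS0", O_RDWR | O_NOCTTY);
        if (fd < 0) { perror("open"); return 1; }

        struct termios tio;
        tcgetattr(fd, &tio);
        cfsetispeed(&tio, B9600);     /* 9600 baud, a common rate for */
        cfsetospeed(&tio, B9600);     /* studio video/audio processors */
        tio.c_cflag |= (CLOCAL | CREAD);
        tio.c_cflag &= ~PARENB;       /* 8 data bits, no parity, */
        tio.c_cflag &= ~CSTOPB;       /* one stop bit (8N1) */
        tio.c_cflag &= ~CSIZE;
        tio.c_cflag |= CS8;
        tcsetattr(fd, TCSANOW, &tio);

        /* hypothetical command: set the mixer crossfade to 50% */
        const unsigned char cmd[] = { 0x02, 'X', 'F', 0x32, 0x03 };
        write(fd, cmd, sizeof cmd);

        close(fd);
        return 0;
    }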

As personal computers become more powerful and robust, we plan to migrate more of these tasks into the workstation, at first with internal dedicated co-processors, and eventually with an entirely software-based solution that would allow any user with an Intel platform to use the full capability of our software without any special internal or external hardware. Of course, significant advances in microprocessor speed and memory architecture would need to take place before this last goal could be achieved.

Click here for further information on the hardware configuration we have been using for our initial performance experiments.

c. Networking

As anticipated, research has shown that the best initial strategy for networking performance sites is a direct ISDN connection. We are using ISDN hardware and have developed an object in our software that provides a simple means of sharing data between platforms across the network. We are experimenting with cross-platform networking and are currently planning a series of multiple-site performances for next season.
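As a rough sketch of the kind of data sharing this object provides, the fragment below sends one named control value to a peer machine as a semicolon-terminated text message over UDP. The peer address, port, message name, and format are invented for the example; the object's actual wire format may differ.

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int sock = socket(AF_INET, SOCK_DGRAM, 0);

        struct sockaddr_in peer;
        memset(&peer, 0, sizeof peer);
        peer.sin_family = AF_INET;
        peer.sin_port = htons(9000);                       /* invented port */
        inet_pton(AF_INET, "192.168.0.2", &peer.sin_addr); /* peer machine  */

        /* one named control value, e.g. a fader position */
        const char msg[] = "fader1 0.75;";
        sendto(sock, msg, sizeof msg - 1, 0,
               (struct sockaddr *)&peer, sizeof peer);

        close(sock);
        return 0;
    }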

d. Experimental Performance Event: Lemma One

Lemma 1 took place on September 27, 1997, at the International Computer Music Conference (ICMC) in Thessaloniki, Greece. ICMC coincided with Thessaloniki's summer-long festival of the arts (the city was the Cultural Capital of Europe for 1997). Our event took place in a large jazz club (Milos) filled to capacity with about 500 people. The performance featured Steven Schick on drum set and George Lewis on trombone; both performers had small video cameras mounted on their hands and microphones on their instruments. The performers were on either side of a large video projection screen, and the audio program was amplified through a quadraphonic speaker system. The three principal investigators ran the computers and associated video and audio devices.

For further information on Lemma 1, including the score, video excerpts, and equipment diagrams, please click here.

e. Communications Protocol

It has become clear to us that our proposed protocol will need significantly more time to develop if it is to have wide relevance. The protocol will be most useful once the infrastructure exists to do all of the multimedia processing on an individual PC; until we come closer to that goal, we cannot meaningfully test and distribute the proposed format.

f. Software Distribution

Alpha versions (with limited documentation) of PD and GEM are currently available for downloading from FTP sites; however, we consider these early versions experimental. Use them at your own risk.

View information and download PD

View information and download GEM

Kerry Hagen's Pd Tutorials

Guenter Geiger's Linux Pd/Gem Distribution


3. Plans for Next Phase

a. Continuing Software Development

A significant amount of time and energy will be devoted to continuing software development. Miller Puckette will optimize the basic code, further develop the user interface, and code new objects. Mark Danks will continue developing GEM, creating additional objects that make more extensive use of OpenGL resources and facilitate custom graphical processes designed by Vibeke Sorensen. Sorensen and Rand Steiger will work with PD and GEM to develop new applications for real-time video, graphics, and music, and will collaborate with Puckette and Danks on new objects to meet their needs and the needs of other users, as feedback comes in from those experimenting with the alpha versions. We are presently only "scratching the surface" of the potential of PD and GEM for generating new content and developing new paradigms of parametric interaction.

We will also develop much more extensive documentation for the software, and as the code becomes more stable we will distribute a beta version more widely.

b. Migration away from External Hardware

As explained above, we resorted to dedicated external hardware to offload some of the video and audio processing from the host computer. The next step in migrating content manipulation into the workstation is to make use of PCI-bus co-processors for audio and video, and to write drivers that let the user take full advantage of this hardware in a simple, direct manner. As with the objects we are writing for our external hardware, the presence of these objects in PD will make it easy to quickly develop applications that take data from one source and use it to control another.
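To make that cross-media control pattern concrete, here is a sketch of the kind of computation such an object performs: reducing a block of audio samples to an amplitude value and mapping it onto an RGB color for a texture. The function names and the particular red-to-white ramp are invented for the example.

    #include <math.h>
    #include <stddef.h>

    /* RMS amplitude of one block of audio samples, roughly 0..1 */
    static float block_rms(const float *in, size_t n)
    {
        double sum = 0.0;
        for (size_t i = 0; i < n; i++)
            sum += (double)in[i] * in[i];
        return (float)sqrt(sum / (double)n);
    }

    /* Map amplitude onto a red-to-white ramp for a texture color */
    static void amp_to_rgb(float amp, float rgb[3])
    {
        if (amp > 1.0f) amp = 1.0f;
        rgb[0] = 1.0f;   /* red stays full */
        rgb[1] = amp;    /* green and blue rise with loudness, */
        rgb[2] = amp;    /* fading the texture toward white */
    }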

In anticipation of significant advances in microprocessor technology, we are continuing to develop objects that allow for the full range of multimedia content generation and manipulation in software.

c. Development of Software for Internet Broadcasts of Multi-Site Performances

Tools exist today for broadcasting audio and video from one source to an individual PC with a modest network connection. For example, the public-affairs cable television network C-SPAN can be viewed on a networked PC using a web browser and plug-ins for streaming video. We will develop tools that allow us to broadcast performances from two different sites over the internet simultaneously, and we will give user/participants software that lets them make their own decisions about how to combine the audio and video signals (e.g., mixing the audio, and viewing video from site one in a small window within the video from site two).
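The simplest version of that user-side control is a crossfade between the two sites' audio; a sketch of the inner loop (names invented) follows.

    /* Crossfade two sites' audio under user control.
       balance = 0 gives only site one, 1 gives only site two. */
    static void mix_sites(const float *site1, const float *site2,
                          float *out, int n, float balance)
    {
        for (int i = 0; i < n; i++)
            out[i] = (1.0f - balance) * site1[i] + balance * site2[i];
    }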

In the next phase of our work, this software will be further developed to accommodate switching among numerous sources of video and audio and combining data from multiple sites. As this develops, the software will gradually evolve to allow anyone with a net connection to jam with anyone else using the same tools, breaking down the distinction between audience and performer and reaching our eventual goal of providing the infrastructure for a truly global visual-music jam session.

d. Establishment of a Multi-Media Server

To facilitate the goals outlined in c. above, a robust and powerful server will be needed to capture and stream video and audio from multiple broadcast sites, and to encourage and accommodate connections between end users. We will develop server software and search for a partner institution to provide the server infrastructure. This server could also house recordings of performances, making them available for rebroadcast over the net.

e. Lemma Two, with Net Broadcast

By the end of the second phase we plan to have in place everything needed for a full dual-site performance broadcast. This would naturally lead to a new performance, Lemma Two. This time we would like to stage several performances, with both sites changing locations, a kind of double tour: group one might perform in Mexico City, Rio de Janeiro, Portland, and Tokyo, while group two moves from New York to London, Prague, and Cairo, for example.

Of course, touring is expensive and requires cultivating presenters and sponsors. We are optimistic that we can draw on our previous experience to work with presenters we know in many different cities; however, we cannot guarantee that resources will be available for an extensive tour. At minimum, we will perform Lemma Two in one pair of locations. Plans are currently under way for a dual-site performance, with webcast, in San Diego in Fall 1998, and another between New York City and Portland in Spring 1999.

We see these performances as points on a trajectory leading toward a true realization of a global visual-music jam session, one that we hope will inspire others to explore this new territory of real-time multi-modal art with the software resources we develop.