Video Digerati
That’s right: media server programmers can now pre-visualize video clips, effects and the look of the full stage in real time, with the video playing.
Still not sure what this means? Think WYSIWYG or ESPVision for video — but on steroids.
Real-Time 3D Modeling
Both the Ai and d3 are revolutionary, real-time 3D modeling media servers. The user can create a 3D model of the stage in a program like 3DS Max, Blender, or Cheetah3D and import it directly into the stage pre-visualization environment.
Using the media servers’ own device profiles for projectors, screens and LED fixtures, the entire video and LED “picture” can be created directly on the media server. Once this has been done, cues can be programmed directly on the server and then triggered internally via timelines or externally via DMX or timecode.
Now, instead of using one server for the media and feeding its video output into another system for pre-viz programming, everything can live in one machine. This greatly simplifies the setup, reduces the FOH footprint and has the added benefit of better performance.
Eliminating signal processors and video cards in between server and projector means fewer dropped frames, reduced delay and fewer opportunities for signal degradation. (Plus, it looks really cool to see a real-time rendering of the entire stage complete with video at FOH.)
Both the Ai and d3 servers also allow you to pre-visualize a multi-projector blend with all of the edge controls that you need to get the blend just right. Being able to pre-viz a project that utilizes more than one projector means the projectors can be positioned exactly where they need to be, and you can also calculate distances and lenses before setting foot in the venue.
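The distance-and-lens math behind that kind of planning can be sketched in a few lines. This is a simplified model with made-up numbers, not any server's actual calculator; real planning should use the projector's published throw-ratio range and lens-shift specs.

```python
# Rough projector-placement math for a multi-projector edge blend.
# All figures below are illustrative.

def image_width(wall_width, num_projectors, overlap_fraction):
    """Width each projected image must cover so that num_projectors
    images span wall_width, with each adjacent pair overlapping by
    overlap_fraction of one image's width."""
    return wall_width / (num_projectors - (num_projectors - 1) * overlap_fraction)

def throw_distance(throw_ratio, img_width):
    """Throw distance = throw ratio x image width (the standard definition)."""
    return throw_ratio * img_width

# Example: a 10 m wide wall, two projectors, a 20% blend overlap,
# and a hypothetical 1.5:1 lens.
w = image_width(10.0, 2, 0.20)   # each image is about 5.56 m wide
d = throw_distance(1.5, w)       # so the projectors sit about 8.33 m back
print(round(w, 2), round(d, 2))
```

Run the same two functions across a lens's zoom range and you have the bracket of usable hang positions before the truck ever leaves the shop.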
3D Mapping Controls
Another big step in feature sets for media servers is the move to 3D mapping controls. Both Ai and d3 offer this feature, as does the new Arkaos Media Master 3.0. Using mapping tools that are similar to keystone correction, the contours of a surface can be defined and then video can be mapped onto that surface precisely. And because the server is outputting the signal in real time, you will see on the screen exactly what is being output to the projectors on the real stage. I find this to be one of the most useful and exciting new features in media servers today. As the popularity of 3D architectural mapping grows, it is rapidly becoming an absolute must-have for any professional-level server.
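The simplest form of that keystone-style mapping is corner pinning: normalized texture coordinates are warped onto an arbitrary four-cornered surface. The sketch below uses plain bilinear interpolation for clarity; real servers typically apply a full projective (homography) warp, and the corner coordinates here are invented.

```python
# Minimal corner-pin sketch: map texture coordinates (u, v) in [0, 1]
# onto a quad by bilinearly interpolating its four corners.

def corner_pin(u, v, tl, tr, br, bl):
    """Map (u, v) into the quad with corners top-left, top-right,
    bottom-right, bottom-left (each an (x, y) pair)."""
    # Interpolate along the top and bottom edges, then between them.
    top = ((1 - u) * tl[0] + u * tr[0], (1 - u) * tl[1] + u * tr[1])
    bot = ((1 - u) * bl[0] + u * br[0], (1 - u) * bl[1] + u * br[1])
    return ((1 - v) * top[0] + v * bot[0], (1 - v) * top[1] + v * bot[1])

# A quad leaning to one side, as if projecting onto an angled set piece.
quad = ((100, 50), (400, 80), (420, 300), (90, 280))
print(corner_pin(0.0, 0.0, *quad))  # lands on the top-left corner
print(corner_pin(0.5, 0.5, *quad))  # center of the warped image
```

Dragging a corner in the server's UI amounts to changing one of those four points; every pixel in between follows automatically.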
Another recent trend in media server land is internal cue storage and playback. For many years, media servers have been a kind of tool for storing media clips and generating video effects that are then stored in cues on a DMX lighting console. But this left out a lot of potential users who have no lighting console programming experience. Media servers like the Ai and d3, though, have changed this by integrating internal cue storage and playback ability. This means that video engineers who are interested in using servers with real-time media manipulation capabilities now have some exciting options. Combined with the pre-visualization environment, the user is able to see the cues directly on the server’s screen as well as on the live stage, along with all timing and transitions.
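Conceptually, timecode-triggered playback boils down to a sorted cue list and a lookup: given the incoming time, find the most recent cue at or before it. The toy model below uses plain seconds and invented cue names; real servers chase SMPTE or MIDI timecode rather than a bare float.

```python
# Toy model of internal cue storage with timecode-triggered playback.
import bisect

class CueList:
    def __init__(self):
        self._times = []  # trigger times in seconds, kept sorted
        self._cues = []   # cue payloads, parallel to self._times

    def store(self, time_s, cue):
        i = bisect.bisect_left(self._times, time_s)
        self._times.insert(i, time_s)
        self._cues.insert(i, cue)

    def active(self, timecode_s):
        """Return the most recent cue at or before the given time."""
        i = bisect.bisect_right(self._times, timecode_s)
        return self._cues[i - 1] if i else None

cues = CueList()
cues.store(0.0, "blackout")
cues.store(10.5, "intro loop")
cues.store(42.0, "chorus particles")
print(cues.active(15.0))  # the 0:10.5 cue is still running at 0:15
```

The same lookup serves both the live output and the pre-viz window, which is why the virtual stage can show the identical timing and transitions.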
Lastly, both the Ai and d3 also have unique media generating tools similar to plug-ins found in video compositing software like Quartz Composer and After Effects. Effects and modules like particle and shape generators, shaders, texture mapping, tunnels, and light rays are just some of the types of generative media content that can be found in the Ai, for instance. Content generation in real time on the server is truly like having a video editing suite at your fingertips because you don’t have to wait for an effect to be rendered before being able to use it in the show. You can create it right on the media server, look at it on the virtual stage, and if you are happy with it, you can store it as a cue, drop it in the timeline and use it in the show.
As the amount of video used in productions increases while budgets shrink, it is becoming more critical that video programmers be able to see the entire scope of their productions much earlier in the process. By using servers with pre-viz capabilities, everyone from the video and lighting directors to the artist can see what the stage will really look like once on site. And since it can be achieved with the same system, this will end up saving time and money for all. (But I won’t tell anyone if you don’t.)