
What is an autonomous workflow?

At Seervision we often talk about our mission to create autonomous workflows that make video production effortless. Our goal is to give AV productions the means to essentially run themselves. We are building a toolset that allows AV professionals to design fully autonomous systems: the Seervision Suite enables constant communication between all kinds of devices and helps tackle a wide range of AV challenges!

How does it work?

Our vision is built on a set of tools that can be used to create seamless end-to-end workflows that run entirely autonomously. The core features of the Seervision Suite are the building blocks of autonomous workflows, and they’re already in action.

Our advanced human detection and prediction algorithms build an understanding of the overall scene. With Seervision, cameras not only keep subjects perfectly in frame at all times through full-body tracking, but also understand what is happening in the scene, react dynamically, and communicate this information to other devices in the AV ecosystem.

Containers are Seervision’s way of triggering multiple actions at once. They hold a variety of preset parameters and can be recalled manually by the user, through the API, or automatically through Trigger Zones. With containers, Seervision can consistently deliver reliable and dynamic shots.
Trigger Zones tie the whole thing together, allowing operators to build a sequence of actions based on a subject’s position in the scene.
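To make the idea concrete, here is a minimal sketch of how containers and Trigger Zones could fit together. The class names, preset keys, and coordinate convention are purely illustrative assumptions for this post, not Seervision’s actual API.

```python
from dataclasses import dataclass

@dataclass
class Container:
    """Holds a bundle of preset parameters that can be recalled as one action."""
    name: str
    presets: dict  # e.g. {"framing": "close-up", "pan_speed": 0.4}

    def recall(self):
        return f"recalled '{self.name}' with presets {self.presets}"

@dataclass
class TriggerZone:
    """A region of the scene; a subject entering it recalls a container."""
    name: str
    x_range: tuple  # (min_x, max_x) in normalised scene coordinates
    container: Container

    def check(self, subject_x: float):
        lo, hi = self.x_range
        if lo <= subject_x <= hi:
            return self.container.recall()
        return None  # subject outside the zone: no action

podium = Container("podium_closeup", {"framing": "close-up", "pan_speed": 0.4})
zone = TriggerZone("stage_left", (0.0, 0.3), podium)

print(zone.check(0.2))  # subject inside the zone -> container recalled
print(zone.check(0.8))  # subject outside the zone -> no action
```

The same container can be wired to several zones, or recalled directly, which is what makes the sequence-of-actions idea composable.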

Upcoming features

In 2022, we’ll be releasing a series of features that will enable operators to elevate their video productions even further. These new features will let users create fully autonomous video production workflows, with no need for manual intervention, by making the Seervision Suite the brain of the AV ecosystem, communicating with all the different devices in the production environment: other cameras, video mixers, audio devices, and lights.

Recognise specific talents in the frame

Our first new release will build on our already advanced computer vision algorithms by creating a system that recognises specific subjects. Through advanced recognition, we can create persistent IDs that are assigned to subjects before a production. This allows users to trigger camera actions and show subject-specific overlays the moment a known talent enters the frame, without manual intervention. With minimal setup time, the Seervision Suite will store and recall actions directly related to your speakers.
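Conceptually, a persistent ID is just a stable key that outlives any single frame, so subject-specific behaviour becomes a lookup. The sketch below is an assumption-laden illustration of that idea; the registry shape, IDs, and action fields are invented for this post.

```python
# Hypothetical registry mapping persistent IDs (assigned before the production)
# to the actions that should fire when that subject enters the frame.
speaker_actions = {
    "speaker_01": {"overlay": "Dr. Jane Doe - Keynote", "framing": "medium"},
    "speaker_02": {"overlay": "John Smith - CTO", "framing": "close-up"},
}

def on_subject_enters(persistent_id: str):
    """Look up the pre-configured action for a recognised subject."""
    action = speaker_actions.get(persistent_id)
    if action is None:
        # Unknown subject: fall back to a neutral default, no overlay.
        return {"overlay": None, "framing": "default"}
    return action

print(on_subject_enters("speaker_01"))
print(on_subject_enters("guest_99"))
```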

Secondary subjects

Augmenting persistent IDs, this feature will allow users to trigger a specific camera action when a second subject enters the scene. Introduce your keynote speakers with ease by extending to a wide shot when they enter, instead of trying to squeeze both subjects into the same frame. Decide in pre-production who the camera will track, and watch the Seervision Suite take care of the rest. By leaving these transitions in the hands of our software, operators can focus on their creativity.
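The rule described above can be sketched in a few lines: the tracked primary subject is fixed in pre-production, and the framing widens whenever anyone else is in the scene. Function and framing names here are illustrative assumptions, not Seervision’s actual API.

```python
def choose_shot(primary: str, subjects_in_scene: list):
    """Return (tracked_subject, framing) for the current scene."""
    others = [s for s in subjects_in_scene if s != primary]
    if others:
        return primary, "wide"    # second subject present: extend to a wide shot
    return primary, "medium"      # primary alone: standard framing

print(choose_shot("keynote_speaker", ["keynote_speaker"]))
print(choose_shot("keynote_speaker", ["keynote_speaker", "guest"]))
```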

Audio-based camera switching

This is one of the most exciting features we are looking to offer in 2022. We already have a way to combine camera actions with audio-based triggers. The communication is bi-directional, which opens up immense possibilities for whole-event automation: microphones can be pointed at a specific person based on Seervision’s scene understanding, and cameras can dynamically switch and reframe between speakers, or between speaker and audience, using positional data provided by advanced studio microphones. The sensitivity of this switching can be fully customised to prevent cuts caused by brief interruptions.
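One common way to implement that kind of sensitivity is a hold-time debounce: only switch cameras once a new voice has held the floor longer than a threshold, so a brief interjection never causes a cut. The sketch below illustrates that general technique with invented names and an arbitrary threshold; it is not Seervision’s actual implementation.

```python
class AudioSwitcher:
    """Debounced speaker switching: ignore voices shorter than hold_seconds."""

    def __init__(self, hold_seconds: float = 1.5):
        self.hold_seconds = hold_seconds  # sensitivity: how long a voice must persist
        self.active = None                # speaker currently framed by the camera
        self.candidate = None             # speaker who may take over
        self.candidate_since = None       # when the candidate started speaking

    def update(self, speaking: str, now: float):
        """Feed the currently loudest mic; returns the speaker to frame."""
        if speaking == self.active:
            self.candidate = None         # active speaker resumed: reset candidate
            return self.active
        if speaking != self.candidate:
            self.candidate, self.candidate_since = speaking, now
            return self.active            # new voice: start the hold timer
        if now - self.candidate_since >= self.hold_seconds:
            self.active, self.candidate = speaking, None  # held long enough: cut
        return self.active

sw = AudioSwitcher(hold_seconds=1.5)
sw.update("alice", 0.0)
sw.update("alice", 2.0)         # alice has held the floor -> switch to alice
sw.update("bob", 2.5)           # brief interjection by bob...
print(sw.update("alice", 3.0))  # ...ignored; still framing alice
```

Raising `hold_seconds` makes switching calmer; lowering it makes the system cut more eagerly.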