We invented the award-winning Adaptive Motion Control Technology


Camera movements as if made by humans

Computer Vision, Machine Learning & Cinematography: this is the Adaptive Motion Control Technology from Seervision. The camera- and robot-independent technology represents a revolutionary change in studio and sports production. It makes it possible to achieve consistently high quality at significantly lower production cost and enables innovative production techniques. Through automation, we take video production to the next level.

A cameraman needs to understand what is going on in his camera’s viewfinder. Our computer vision pipeline relies on the newest and most advanced neural networks to analyze every single frame captured by your production camera, detecting objects of interest, the background and the overall scenery. Like a cameraman, we create an internal model of the scene that propagates in time: a live map of the scene and its points of interest, so you can design your production around them.
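The idea of a scene model that propagates in time can be sketched in a few lines. This is a hypothetical illustration, not Seervision's actual pipeline: the `SceneMap` class, the exponential-smoothing update, and the per-frame detection dictionaries are all assumptions standing in for the neural-network detector and tracker.

```python
from dataclasses import dataclass

@dataclass
class Track:
    """A point of interest whose position is smoothed over time."""
    x: float
    y: float

class SceneMap:
    """A minimal live map of the scene: per-object positions blended
    across frames with an exponential moving average."""
    def __init__(self, smoothing: float = 0.3):
        self.smoothing = smoothing
        self.tracks: dict[str, Track] = {}

    def update(self, detections: dict[str, tuple[float, float]]) -> None:
        # Blend each new detection with the existing estimate so the
        # model propagates in time instead of jumping frame to frame.
        a = self.smoothing
        for name, (x, y) in detections.items():
            if name in self.tracks:
                t = self.tracks[name]
                t.x = (1 - a) * t.x + a * x
                t.y = (1 - a) * t.y + a * y
            else:
                self.tracks[name] = Track(x, y)

scene = SceneMap()
scene.update({"presenter": (100.0, 50.0)})  # frame 1: first detection
scene.update({"presenter": (110.0, 50.0)})  # frame 2: object has moved
print(round(scene.tracks["presenter"].x, 1))  # → 103.0
```

The smoothing keeps the map stable when a detector misses or jitters for a frame; a production system would use a proper multi-object tracker in place of this moving average.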


Scene perception and environment modeling: Real-time object detection

Camera path planning to optimize cinematography

Moving a camera and lens like a human is perhaps the most challenging task in our technology pipeline. It is often underestimated at first, but when you see a robot trying to adjust a camera to a moving object using live tracking, you quickly realize why real-time optimal control is one of the most challenging tasks in robotics and electrical engineering in general. We use the output of our scene perception and environment modeling pipeline to create models of the scene both now and a few seconds into the future. Based on these, we plan the best possible future positions for the camera and lens while respecting the constraints imposed by the robotic system we are controlling. This can be our own robotic head or any other robot with a known motion model.
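A minimal sketch of what "respecting the robot's constraints" means, assuming a single pan axis and only a velocity limit; the function name and numbers are hypothetical, and the real pipeline would also handle acceleration limits, the lens, and the predicted future scene.

```python
def plan_step(current: float, target: float, max_vel: float, dt: float) -> float:
    """One planning step toward `target` (e.g. a pan angle in degrees),
    clamping the commanded velocity to the head's limit `max_vel` (deg/s)."""
    desired_vel = (target - current) / dt
    vel = max(-max_vel, min(max_vel, desired_vel))  # respect the robot's limit
    return current + vel * dt

# Drive the pan axis from 0 toward a planned 30-degree position
# at 20 Hz; the head can pan at most 20 deg/s.
angle = 0.0
for _ in range(40):
    angle = plan_step(angle, 30.0, max_vel=20.0, dt=0.05)
```

Because the commanded velocity is clamped rather than applied naively, the axis glides to the target at its maximum rate and settles without demanding motion the hardware cannot deliver.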


Cinematography is an extremely complex task that combines human creativity with the dexterity to handle multiple-input camera equipment. Considering that camera operators often have to follow a director's instructions in harsh conditions, it is not an exaggeration to call the excellent ones TRUE NINJAS. No wonder skilled camera operators are an expensive asset. In order to build assistive automated systems for cinematography, we go beyond traditional methods in robotics and control. We need to identify the optimization problem that an operator is solving, so we can automate the mundane parts of executing a shot while the operator can still adjust all the creative parts. In this example we plan a path for the camera (red) that reacts to the movement of an object (blue) in order to keep a constant distance.
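The constant-distance behaviour in that example can be stated geometrically: at each step, project the camera onto the circle of the desired radius around the object. This is a hypothetical sketch of that one rule, not Seervision's planner; the function name and coordinates are assumptions.

```python
import math

def follow_at_distance(cam: tuple, obj: tuple, distance: float) -> tuple:
    """Move the camera (red) along the line through the object (blue)
    so that it sits exactly `distance` away, preserving its bearing."""
    dx, dy = cam[0] - obj[0], cam[1] - obj[1]
    d = math.hypot(dx, dy)
    if d == 0:
        return cam  # degenerate case: camera on top of object, leave it
    scale = distance / d
    return (obj[0] + dx * scale, obj[1] + dy * scale)

cam = (0.0, 0.0)
for t in range(5):
    obj = (float(t), 2.0)              # the object drifts to the right
    cam = follow_at_distance(cam, obj, 3.0)
```

After every step the camera is exactly 3.0 units from the object, whatever path the object takes; a real planner would additionally smooth this path and check it against the robot's motion model.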

Practice, Apply, Learn, Teach, Repeat

A cameraman is not born with the skill to produce the awesome shots we see in 'Planet Earth', 'Roland Garros' or 'The Voice'. It takes years of practice, first at low-end events, before one is ready for the “big scene”. The same holds true for our autonomous cameramen. Even better, actually, since our systems learn and improve throughout their lifetime. We have created a virtual environment, in fact a full-blown computer game, where we can reproduce any video production setup, deploy tens of cameras and test prototype control algorithms. We evaluate their performance in photo-realistic conditions, collect the data needed to improve our controllers, adjust the relevant design parameters and deploy them again. We can do this automagically, tens of times per day, also taking into account reference footage that we have collected and analyzed from real-world live video production. Machine learning at its best!
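The evaluate-adjust-redeploy loop can be illustrated with a toy version: sweep a controller parameter against a reference trajectory and keep the setting with the lowest tracking error. Everything here is a stand-in assumption, not the actual virtual environment: `simulate` is a one-axis first-order controller, and the sinusoid plays the role of footage-derived reference motion.

```python
import math

def simulate(gain: float, target_path: list) -> float:
    """Toy stand-in for the virtual environment: a first-order controller
    chases a reference path; returns the mean absolute tracking error."""
    pos, err_sum = 0.0, 0.0
    for target in target_path:
        pos += gain * (target - pos)   # controller step toward the target
        err_sum += abs(target - pos)
    return err_sum / len(target_path)

# Reference motion standing in for analyzed real-world footage.
reference = [math.sin(0.1 * i) for i in range(200)]

# Evaluate several candidate gains and keep the best performer.
best_gain = min((0.1, 0.3, 0.5, 0.7, 0.9),
                key=lambda g: simulate(g, reference))
```

Repeating this sweep automatically, many times per day and against many reference clips, is the essence of the loop described above; the real system scores photo-realistic renders rather than a scalar error.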

