Understanding Television Production Cameras

This is a camera, this is a camera, and this is also a camera – in fact, the very first camera. Cameras come in all shapes and sizes, and their design, function and required capabilities depend on their application. They range from the cameras we keep in our pockets to the closed-circuit surveillance cameras on our streets, and even the ones we send into outer space. This video will focus on the types of cameras used in television productions, looking at how and why they've been designed in a certain way, how they operate and the part they play in the overall production workflow.

So what makes a television camera different from any other kind of consumer or professional camera? Well, the difference becomes a lot clearer when you understand the nature of television productions, although cameras used for television differ greatly even among themselves. Which equipment is used and how it's set up depends on the type of work.
For television, the most common forms are field production, studio production, electronic field production and electronic news gathering.

Studio production – as the name suggests – takes place in a television production studio, where there is much more control over sound and lighting. These productions tend to use multiple cameras running simultaneously, and the audio is captured and processed independently through a mixing desk. The video signals are sent to the gallery, where they are composed by the vision mixer; once combined with the audio signals, the result can be transmitted to our televisions at home. You're more likely to be dealing with talent that addresses the camera directly with the assistance of a teleprompter, and 'talking heads' is one of the more common styles of composition. Typical examples are newsrooms, talk shows, quiz panel shows and most sitcoms. It's where most of your green-screen work is going to be, and there are a lot more people involved in the general workflow. With the exception of news broadcasts, many of these shows will accommodate a live studio audience, though the shows themselves can be live or pre-recorded.

Field production takes place
external to the conventional television studio and may involve just a single camera, with sound being recorded in-camera or via a portable external recorder. With field production you're going to be working with varying natural light, or perhaps in the dark, so you'd want a camera with good sensitivity. The environment isn't going to be as controlled as it would be in a studio – the weather may take a turn for the worse and you may need to move to get your best shot – so all the equipment needs to be fairly mobile.
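Power planning makes that mobility requirement concrete. Here is a minimal sketch of the arithmetic, using made-up figures rather than any real camera's power draw or battery capacity:

```python
import math

# Rough power-budget sketch for a multi-day field shoot. All figures are
# illustrative assumptions, not the specs of any real camera or battery.

def runtime_hours(battery_wh: float, draw_watts: float) -> float:
    """Hours a battery can sustain a given continuous power draw."""
    return battery_wh / draw_watts

hours = runtime_hours(190, 38)          # e.g. a 190 Wh brick feeding a 38 W rig
print(f"{hours:.1f} h per battery")     # 5.0 h

per_day = math.ceil(8 / hours)          # batteries for 8 h of shooting a day
print(f"{per_day * 3} batteries for a three-day shoot")
```

With no mains power available, the crew simply has to carry enough charged bricks for the whole trip.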
Shoots of this nature could potentially stretch across days or even weeks, where there may not necessarily be access to an external power supply. What should come to mind are documentaries, or perhaps segments of a show involving social experiments in public, rather than a live show – though field production material may be included during live studio broadcasts. This type of work has historically been known as Electronic News Gathering, or 'ENG', though the term has gradually become less common.

Electronic field production, again,
takes place outside the television studio, instead at a dedicated venue for a specific event which accommodates camera crews. As such, it's normal for television networks to bring a production truck to transport equipment that can enable a live broadcast feed from both handheld and hard-mounted cameras. Some common examples of EFP include music festivals, political conventions and sports coverage. If you happen to be filming in a stadium, you may be quite far away from the action, so a useful lens would be one with a long focal length and a high zoom ratio. For this reason, some broadcasters are even starting to use 4K cameras so that they can zoom in digitally while maintaining a broadcast-standard resolution. All the fast movement expected from sport also makes it handy to use a camera capable of shooting at higher frame rates for smooth slow-motion playback. These scenarios may
already be suggesting what else these cameras need to be able to do. Studio cameras are designed to work in an environment where other cameras are also in use. Having them work together efficiently in the studio is aided by their ability to be 'genlocked': multiple video signals are locked to a common reference so that they run on a shared timecode and refer to the same timing information. These cameras will also need an illuminated tally, which lets the camera operator and the on-screen talent know which camera is live. It's one of the many ways in which the cameras communicate with the gallery. The addition of an illuminated tally is part of the process for converting an EFP camera for use in a live, multi-camera studio.

So why are studio cameras so
big? Anyone who has glimpsed behind-the-scenes footage of a film or television show has thought about this question. It's understandable, and people will be tempted to compare them to cameras aimed at consumers, and even the more advanced 'prosumer' cameras, without realizing the difference in technical demands. One of the crucial features of professional cameras as a whole is that many of the parameters can be adjusted independently and ergonomically via a panel of manual controls designed for each function. On consumer and even prosumer video cameras, many of these functions – whilst commonly present – are often buried in menus that require navigation by means of a single directional pad on the rear of the camera, which is no good if you need to keep your eyes on what you're shooting and make adjustments on the fly. With professional cameras, the parameters accessible from the menus are more for things like playback and display settings, recording format and metadata – most of which are configured before or after shooting.

So I've spoken about
metadata, but what is it and what's the point? It's a means of logging important details from a shoot, which may prove useful when someone is trying to find a certain sequence at the editing stage, or when footage is being archived for future viewing. Some extensive projects may well have weeks of footage in need of sorting. The basic forms of metadata people encounter most are file size and the date and time a file was created. For film and television production a lot more information is required, and typical forms of video metadata include the make and model of the camera, the name of the camera operator at the time, the name of the show, the set name, the location, the timecode in and out, and usually some additional space for any other notes. It's saved with all the files on the memory card, so that wherever the footage goes, this information goes with it. These
cameras don't just differ in terms of software but also hardware. All the buttons, triggers and switches not only require more space on the body of the camera, but each of these designated controls feeds into a dedicated board, which takes up space inside the camera as well. And once all this technology is crammed inside, what's going to keep it safe from damage? A sturdy, resilient chassis that may well be subject not only to physical impact but also to harsh weather and high and low temperatures. All of this technology results in relatively large power consumption, which may need to be handled by a big, hefty battery, which in turn needs its own secure mount. Mounting the battery on the outside of the camera makes it quickly accessible, presumably better ventilated, and allows for a bigger battery, although having it exposed like this requires it to be fairly robust.

Now things get a
little more technical. The larger the camera, the larger the sensor can be. The larger the sensor, the more information it can receive from the lens, which in turn means better image quality. So how is that any different from saying "the bigger the shoes, the faster the runner"? Well, a bigger sensor usually means larger pixels, and the increased surface area means a higher photon capacity, a higher signal-to-noise ratio and a higher dynamic range to preserve the details in the image's shadows and highlights.

When it comes to sensitivity,
f/11 at 2000 lux seems to be the benchmark – f/11 allowing for reasonably deep focus, lux being a unit of illuminance, and 2000 lux being the light level you'd expect on a typical overcast day. Another benchmark is the camera's signal-to-noise ratio, measured in decibels, which expresses the ratio between the strength of a signal and the background noise – much as in audio – and in video the noise takes the visible form of graininess superimposed on the image. The general consensus is that anything above 50 decibels will produce a fairly usable image with some minor grain, although most professional and ENG cameras will have upwards of 60 decibels – a signal a thousand times stronger than the noise – which will result in a very usable picture with little to no noise.

Then we've got to think about the
lens. The run-and-gun nature of live television and coverage of live events means camera operators don't have time to switch between different lenses during a shoot. Instead, they'll use one zoom lens that covers a wide range of focal lengths and apertures. The result is a big, hefty lens that requires a sturdy camera body to mount onto. Naturally, it alters the camera's center of mass, so the body is designed to counterbalance this and maintain stability. Much like lenses for movie-making applications, another important aspect of the studio camera lens is that it is 'parfocal', meaning the lens can hold its focus on a subject when the focal length changes.
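As a rough numerical sketch of the geometry such a zoom sweeps through, the horizontal angle of view follows from the focal length and the sensor width. The figures below – a 9.6 mm-wide 2/3-inch broadcast chip and an 8–160 mm zoom range – are illustrative assumptions, not any real lens's specifications:

```python
import math

# Angle-of-view sketch for a broadcast zoom. The 9.6 mm sensor width
# (a 2/3-inch chip) and the 8-160 mm zoom range are illustrative figures.

def horizontal_aov_deg(sensor_width_mm: float, focal_mm: float) -> float:
    """Horizontal angle of view, in degrees, from sensor width and focal length."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_mm)))

W = 9.6  # approx. horizontal width of a 2/3-inch broadcast sensor, in mm

print(f"wide end: {horizontal_aov_deg(W, 8.0):.1f} deg")    # ~61.9 deg
print(f"tele end: {horizontal_aov_deg(W, 160.0):.1f} deg")  # ~3.4 deg
```

A parfocal zoom holds focus while sweeping that entire range, whereas a lens that 'breathes' would shift this angle slightly whenever you refocus.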
This allows for a sharp image over a long focal range without the need to refocus. It also means you can refocus a shot without throwing off the angle of view – avoiding the shift commonly known as 'lens breathing'. A lens like this may be described as having a 'Constant Angle Focusing System', where a CPU monitors the displacement of the optics and all zoom and focus movements are internal. This technology is difficult to manufacture and requires more optical components than a simple prime lens, which in turn results in a bigger lens.

Box lenses are identified by
their box shape, with no visible external controls. You may be wondering why they're shaped like a box when the optical elements are still round. It's because the CPU and the supporting electronics exist in the form of boards, which are flat and are therefore placed around the perimeter of the round optics within a box-shaped housing. These lenses have their own CPU that interprets lens control functions electronically from a servo zoom demand, which looks like -this-. It gives easy access to the various lens and camera functions through a multi-pin cable, each pin carrying a different signal. This CPU can make precise zoom and focus moves repeatable, which is useful for doing multiple takes of the same sequence, and the servo demand has a potentiometer to alter the speed of the zoom.

Then there's ENG
lenses, which bear a much closer resemblance to 35-millimeter cine lenses by having a zoom ring, an aperture ring and distance markings. The build quality of ENG lenses is where compromises are made in order to have something smaller and fairly lightweight that still performs well on the go, whilst remaining compatible with a servo and covering a similarly wide range of focal lengths and apertures to the box lens. Both types of lens use what's known as a B4 lens mount, which, unlike other lens mounts, is optimized to work with the beam-splitter technology found in three-chip cameras such as 3CCD or 3CMOS, where separate red, green and blue sensors are assembled around a prism to enhance the precision of
color reproduction. Three-chip cameras can also utilize a technology known as 'spatial offset processing', an effort to reduce aliasing and chromatic aberration and enhance resolution by shifting the red and blue sensors half a pixel horizontally – and sometimes also vertically – with respect to the more sensitive green sensor. The offset lets the camera use larger, more sensitive pixels while still recovering fine detail, which helps create a better-quality picture. That's another expensive manufacturing process that adds to the cost of these types of cameras.

By
looking at professional lenses you'll see a lot of -this-. So how can a lens be 'HD'? And why aren't standard-definition lenses good enough? After all, glass is glass, isn't it? Not exactly. When the early standard-definition lenses were designed, there were no 'high-definition' cameras to test them with. Hell, the technological standards for 'high definition' weren't even established yet. So what are the criteria for an HD broadcast-standard lens? They tend to revolve around a high MTF characteristic, maximized contrast performance, minimal distortion, minimal lens flare and internal reflection, and sufficient color reproduction. MTF stands for 'modulation transfer function', and the MTF of a lens gives a good indication of the spatial frequency it can resolve.
It's tested by using black-and-white burst charts to see at which point the combined efforts of the lens and camera can no longer distinguish the white from the black, and instead just create a grey mush. A television production camera should be able to resolve a high spatial frequency, which is measured in horizontal lines of resolution. Not only that, but this high level of performance needs to be reasonably consistent across the entire image plane, as certain attributes progressively deteriorate with distance from the central point. To meet such standards, optical engineers rely on the cumulative surface tolerances of up to 30 optical elements falling within certain nanometric specifications… or, in plain English, everything's got to be on point.
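One common way to budget such tolerances is a stack-up analysis: in the worst case every surface error adds in the same direction, but statistically independent errors grow only with the square root of their count. A minimal sketch with hypothetical numbers – the 12 nm per-surface budget is an assumption, not a real specification:

```python
import math

# Tolerance stack-up sketch: how errors across ~30 optical surfaces combine.
# The 12 nm per-surface budget is a hypothetical figure, not a real spec.

per_surface_nm = 12.0   # assumed RMS error contributed by each surface
surfaces = 30

worst_case = per_surface_nm * surfaces        # every error stacking one way
rss = per_surface_nm * math.sqrt(surfaces)    # independent errors (root-sum-square)

print(f"worst case: {worst_case:.0f} nm")     # 360 nm
print(f"RSS:        {rss:.1f} nm")            # 65.7 nm
```

Either way, keeping the total error within a fraction of a wavelength of light is what forces each individual surface into nanometric territory.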
But that's no excuse to keep adding more and more glass, since each optical component becomes an additional obstacle in the path of light, which can itself lead to distortions of the image. From an economic standpoint, production companies and broadcasters intend to buy this equipment for long-term use, so it makes sense that they expect cameras to handle whatever conditions are thrown at them – you could say it's just the nature of the business. The cost of a reliable lens goes above and beyond the cost of a good camera.

These cameras will also accommodate a larger viewfinder, which
can aid in composing shots and afford more screen real estate for tools such as a gamma table. The gamma table is useful for fine-tuning the exposure of a shot with custom gamma, and can be used in the gallery to match up shots from multiple cameras by matching their gamma curves.

Broadcasters also have a preference towards video that has been
captured with little to no chroma subsampling or compression, and the final output video undergoes chroma subsampling at a ratio of 4:2:2 ready for broadcast. This basically means that in a given sample four pixels wide and two pixels high, every two pixels in each row share the same chroma information. In other words, 4:2:0 and 4:1:1 both involve too much down-sampling: if you tried to chroma-key such footage for a green screen, the edges just wouldn't look natural.

Understanding what is and is not
broadcast standard in the technical sense is made easier by reading the technical delivery standards documentation published by broadcasters such as the BBC. For example, they outline that content delivered for HD transmission in the UK must have a spatial resolution of 1920 by 1080 pixels – Full HD – giving an aspect ratio of 16:9. They also demand that final output footage have a temporal frequency of 50 fields per second in interlaced format, notated as '1080i 25' or '1080i 50Hz'.

The rationale behind these specifications is a little complicated. 1920 by 1080 at 16:9 was agreed upon when the specs for HD television were being drawn up back in the late 1980s by the Society of Motion Picture and Television Engineers. The idea was put forward by Dr. Kerns H. Powers on the grounds that the ratio was the geometric mean between the outdated standard-definition ratio of 4:3 and the widescreen cinema ratio of 2.35:1 – √(1.33 × 2.35) ≈ 1.77, which is almost exactly 16 ÷ 9. A refresh rate of 50Hz was established long before, and is used to
match the mains frequency of the AC power in the United Kingdom, which, being in the PAL region, is 50Hz. This prevents the hum from the electric current producing a beating distortion in the image, also known as intermodulation. In the United States, part of the NTSC region, this would be 60Hz, notated as '1080i 30' or '1080i 60Hz'. The 'i' stands for interlaced, a method of imaging designed to conserve bandwidth – and, historically, cost. Whilst progressive imaging works by consecutively showing one unique frame at a time, interlacing divides the picture into upper and lower fields, which alternate at a rate of 50 times a second in the PAL region and 60 times a second in NTSC. It happens so fast that, thanks to 'persistence of vision', the individual fields are undetectable to the human eye, and our brains register an after-image between each field that helps us interpret it as a complete moving image – and that's how we watch television!

The conversation around broadcast camera technology is virtually endless, though I hope this video has offered a detailed overview of the many factors that help broadcasters create the best-quality material for the public.
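As a closing sketch, the field splitting behind interlaced scanning described above fits in a few lines:

```python
# Sketch of interlaced scanning: a frame is split into an upper (odd-line)
# and lower (even-line) field, and the two fields are shown alternately.

def split_fields(frame):
    """frame: list of scanlines -> (upper field, lower field)."""
    upper = frame[0::2]   # lines 1, 3, 5, ... (counting from 1)
    lower = frame[1::2]   # lines 2, 4, 6, ...
    return upper, lower

frame = [f"line {n}" for n in range(1, 9)]   # a tiny eight-line 'frame'
upper, lower = split_fields(frame)
print(upper)   # ['line 1', 'line 3', 'line 5', 'line 7']
print(lower)   # ['line 2', 'line 4', 'line 6', 'line 8']

# at 1080i 25 there are 50 such fields a second: 25 upper and 25 lower
```

Each field carries half the vertical resolution of the full frame, which is exactly how interlacing halves the bandwidth.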

