There’s a new game in Techtown, and it is becoming the industry standard for remote collaboration over the cloud. Frame.io recently launched Camera to Cloud, and it shook our workflow foundations. Michael Cioni, Global SVP of Innovation, joins our host, Cirina Catania, for a deep dive under the hood into how this will enable collaborators to transmit footage from camera and audio to their remote teams anywhere in the world faster, more easily, and more securely than ever before.
In This Episode
- 00:01 – Cirina introduces Michael Cioni, the Global Senior Vice President of Innovation at Frame.io.
- 02:23 – Michael explains what Frame.io and Camera to Cloud are, and how long he and the team have worked on them.
- 06:45 – Michael talks about the cameras and audio equipment that are compatible with Frame.io.
- 13:05 – Michael shares the process of how files are copied and sent to Frame.io.
- 17:41 – Cirina asks Michael how DITs (Digital Imaging Technicians) feel about Frame.io.
- 23:09 – Michael talks about Frame.io’s naming convention and how it syncs up audio and video files.
- 29:38 – Michael explains how secure Frame.io is and its process of authentication and authorization of editors.
- 36:32 – Cirina asks Michael how they made Frame.io’s file transfer function faster than other applications.
- 46:21 – Michael gives some tips on how to handle the versioning of files in Frame.io.
- 49:43 – Michael shares what he sees for the future of post-production.
- 58:50 – Cirina and Michael encourage listeners to visit Frame.io to learn more about its latest innovations. Also, check out its new technology, Camera to Cloud, at frame.io/c2c.
Michael, I’ve been bragging about you at the beginning of the show and here you are. I’m so happy you’re with us. I want to talk to you today about Camera to Cloud, Frame.io, post houses of the future, where we’re going. I think this is going to be awesome. It’s always fun talking with you. So welcome.
Thank you for having me. Like you said, we’ve known each other for years and been doing this a long time. I’m always happy to be back with you and always happy to talk to the listeners where we want the feedback because what we’re going to talk about today involves everybody. We’re looking forward to more communication with the community.
It does. When you announced it, there was an immediate reaction in the industry about what you had presented. There is just a huge audience for this in media producers worldwide no matter what you’re in. I think you really are going to change the future. For people who don’t know, what is Frame.io and what is Camera to Cloud?
Frame.io is an online review and approval database. It’s the most popular professional review and approval tool in the world. We have over a million people uploading millions of assets a day into the platform and sharing them, reviewing them, marking them, and delivering them. It’s a really interactive space—professional video sharing and collaboration—in the Cloud. Camera to Cloud is now a new feature set of Frame.io where we can actually have people on the set, in the field, and shooting directly into Frame.io, so that you can have access to everything you’re doing, even deliver it right to an editor while it’s still being shot in the process.
Shoot directly into Frame.io. You make it sound easy. I know this didn’t happen overnight. How long have you been working on this in the background?
The answer is about 10 years. A good friend of mine, Laura Pursley, said after we announced, “You’ve been working on this for 10 years. This is your life’s work. You’ve been gunning for this for a decade.” When she said that, something released in me, because I think she’s right. But essentially, Frame.io didn’t invent the idea of Camera to Cloud. We’re not the first people to whisper it into the market. A lot of people have been trying it; I tried it a decade ago, but the infrastructure wasn’t there.
The way to make it work and the way to make it scalable and easy wasn’t there. We didn’t have things like AWS 10 years ago, 4G was brand new. Remember, smartphones were new 10 years ago, and we didn’t have ways to leverage LTE networks. Even home Wi-Fi was far weaker than it is today or business Wi-Fi. These are all elements of evolution that had to take place in order to set the stage for a direct camera and audio to post-production relationship to occur. Now that that stage is set, we’re at the very beginning of turning it into a total industry standard.
When you made the announcement, you had a union crew and it was tailored toward large theatrical productions. I’m thinking, there are a lot of other companies that could use this too. Have you thought about introducing this to news or independents? What do you think?
That’s a great point. I guess the question is, who is this for? The answer is everyone. It’s for everyone. If you think about it, when we take out our smartphones, we are doing camera to cloud and audio to cloud every single day. We take out our phone, we see a moment, we shoot a video or a still, we edit it real quick in the palm of our hand, and then we share it to our social media channel. What you basically do when you do that is shoot, edit, do post-production, and then distribute; you are, in effect, your own studio.
On a small scale, billions of people a day are shooting, editing, sharing, and distributing. If we can do that in our personal lives, why can’t we do it in our professional ones? The infrastructure for shoot, look, share (camera to cloud or audio to cloud) already exists in the social media world. Now we have the opportunity to start doing it in the professional one. While we demonstrated this with union crews, we wanted to demonstrate it works in a very large infrastructure, when you have separate picture and separate sound with multiple cameras, and you’re dealing with multiple departments and multiple editors on multiple platforms.
Our intention was to show that it has the width to handle a large studio production style, but it’s always scalable. Once you can handle that, it’s easy to pare it down and scale it down. That’s the idea of where we want to take it. We want this to be a tool for TikTokers as much as we want it to be for OTT studios.
Independent production has become so much more sophisticated than it was a few years ago. A lot of what you were talking about in the demonstration, and by the way, those of you listening, if you have not seen it, it’s on YouTube. You can go to Camera to Cloud, you can see the full demonstration in all detail of exactly how it works. I do want to unpack it a little bit for our audience as well. What cameras at the moment does this work with?
At the moment, this starts with SDI cameras, cameras that have a BNC SDI port on them. Basically, the entire RED fleet of cameras, the entire ARRI fleet of cameras, we now have Canon, Sony, and Panasonic SDI cameras. The objective is to start with SDI because SDI is a protocol that has certain rules that people follow. If you follow those rules, you can take that protocol, and it’s easy to implement the same result in the cloud.
What people obviously want is to include other cameras that don’t have SDI; those use HDMI. With an HDMI camera, there is a different set of rules, expectations, and standards for how to make that communication work. Obviously, the number of cameras with HDMI outputs versus SDI is significantly disproportionate, so it’s a much heavier lift to get all the HDMI cameras working this way.
Remember, this is brand-new stuff; there really hasn’t been a way to do this before. We’re building the bridge we’re standing on, but all the manufacturers have been super engaged; they recognize it now and see the opportunity. Everybody wants what I can do on Facebook or Instagram on my phone, and they want that in every camera, which is why, ultimately, this will touch everyone. To start, we’re fixing everything with SDI cameras first, and then we’ll expand into the rest of the ecosystem.
I’m being selfish. One of my films is sponsored by Blackmagic. I’m hoping we’ll get there very soon.
We want Blackmagic more than anyone, honestly. The Blackmagic company is so important to the industry. We have to have the same basic protocols for SDI or HDMI outputs to allow the technology to have a ubiquitous language for transmission.
Okay, unpack this. We’re talking about the cameras. Now, let’s talk about what equipment you need for audio to record, and then I want to go into the actual workflow on the set a little bit.
The cameras connect into a box made by the company Teradek called the Cube; the model is the 655. I think it also works with the 605; there are a couple of models in the 6 series that work. You connect your SDI camera output to a Cube input, and the Cube authenticates to the internet and Frame.io. Every time you hit record, it publishes that take. I’m actually doing it right now. What people are seeing right now is a RED camera, a Helium, connected to a Cube, and that Cube is connected to Frame.io. When I hit cut, it will publish that file automatically to Frame.io so Cirina can post it later.
By the way, everybody listening, I was laughing with Michael right before we started because of course he has a Helium camera. The Helium cameras are top of the line; it’s great. I’m recording this on a BRIO webcam. You’re on the Helium, you’re recording to Frame.io. We were about to talk about audio before I interrupted.
With audio, you can run a scratch track into the Cube so you get audio automatically synced to it. If your cameras have mics on them, you can tell the Cube to just receive the audio through the SDI, so audio from a mic on the camera is passed into the Cube. If you’re doing dual-system audio and you record with a separate system, today the first (and so far only) company to have an internet connection in their field recorder is Sound Devices.
They have a new series called the 8 series. There are two models, the 888 and the Scorpio, and those are field recorders. The 888 is an exceptionally high-quality field recorder. I mean, this is top-of-the-line audio quality, audio noise reduction, cleanliness, multi-channel, very, very high-end control. What you do is you record your sound, and every time you hit cut, it publishes that audio to Frame.io. You can still record it locally on the device, but in the background, when you’re rolling the next take or the next interview, it’s publishing the previous take or the previous interview to Frame.io so everyone has access to it.
You can sync the audio later in post, so you get picture and sound. You can sync it in Frame.io; we have a beta out for automatic audio-video syncing. Or you can run that audio into the Cube and it’s already synced for you. There are a lot of variations. What’s most important to think about is that, historically, when there’s a technological development in our industry, it always happens to audio first. Why is that? The number one reason, as you could probably guess, is that audio files are on average smaller than video files. When nonlinear editing came out, it was easier for the audio systems to do it before picture.
When online finishing came out, it was easier to do audio mixing than video finishing, what we’d call color correction. That was still on tape when audio was on drives. When it came to transmission over the web, it was easier to stream radio stations, audio, and music before anyone could stream video and pictures. Things happen to audio first before they happen to picture. Think of CDs: audio was distributed digitally before video was, because of file sizes. There was more than a 10-year gap between CDs and DVDs.
What we think is going to happen, which is really cool, is we’re going to see rapid adoption of Audio to Cloud technologies directly in the audio recorders themselves. Over the next two years, you’ll see every audio manufacturer will have a cloud integration with Frame.io, and they’ll be publishing those audio assets right to the Cloud. Even if you want a local copy on an internal drive, you really have that as the backup. The hero audio is now already backed up in the cloud and confirmed. That is going to be the trend. Over the next 10 years, we’ll see that trend also develop into video because the video files are bigger. So it’s harder to do those transmissions, which is why today, we’re using the Cube to make proxy files.
You’re shooting audio, you’re shooting video, you’re sending those files to Frame.io. What goes to Frame.io are the proxy video files, right? Can you talk about what’s the payload and what are we sending?
That’s a great question, because a lot of people will say, what if I’m not online and don’t have internet, what’s the case for doing this? The Cube is capturing a little tiny proxy file. Basically, the quality of file you would get on YouTube: H.264 files, either HD or UHD, somewhere between two megabits per second and maybe six megabits per second. That’s what the Cube does: you set it between two and six megabits. I’m shooting this at about four, and it’s a YouTube-quality video in a way, but it gets the file name from the camera, it gets the timecode from the camera, and it gets the triggering from the camera when you hit record and stop.
This proxy file is essentially a tiny version of the original camera file that’s instantly uploaded. Normally, if you shoot on a camera, you transcode those files afterward; this does the transcode for you automatically while you’re shooting, and then publishes that transcode to Premiere, Final Cut, or Resolve through Frame.io so that your editor or your reviewers can just see it.
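To put those bitrates in perspective, here is a back-of-envelope calculation. The 4 Mbps proxy bitrate and 50 Mbps uplink speed are illustrative assumptions, not Frame.io or Teradek specifications:

```python
# Back-of-envelope math for the proxy sizes described above. The 4 Mbps
# proxy bitrate and 50 Mbps uplink are illustrative assumptions, not
# Frame.io or Teradek specifications.

def upload_minutes(duration_s, bitrate_mbps, uplink_mbps):
    """Minutes needed to upload a clip recorded at bitrate_mbps."""
    # Clip size in megabits = duration * bitrate; divide by uplink speed.
    return duration_s * bitrate_mbps / uplink_mbps / 60

take = 60 * 60                      # an hour-long take, in seconds
size_gb = take * 4 / 8 / 1000       # 4 Mbps -> megabytes -> gigabytes
print(f"proxy size: {size_gb:.1f} GB")
print(f"upload over a 50 Mbps uplink: {upload_minutes(take, 4, 50):.1f} min")
```

At these rates an hour-long take is under 2 GB and uploads in roughly five minutes, while the original camera RAW for the same take could easily be a hundred times larger.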
We even have people taking these files and transcribing them right away, because once it’s in Frame.io, you can send it to a company like rev.com and instantly have your interview transcribed. You didn’t have to download anything, upload anything, or transcode yourself. It’s already done and ready. These are the types of step-skippers that are so efficient and so speedy that this is going to be rapidly adopted, because people can see the instant return of the speed delivered with Camera to Cloud.
The Cube has an SD card in it, so you’re still able to keep all of your original files.
That’s right. The Cube will record on board. Let’s say you have a weak or unreliable internet connection. You would want to be using an LTE hotspot; you can even use your phone as a hotspot if that’s all you had. The market has hotspots from $150 to $10,000, in all different sizes. If you want a more powerful hotspot, it really comes down to the quality of the antenna. I think that’s an important data point.
If you’re going to do a lot of field camera-to-cloud or audio-to-cloud, the hotspot is useful (it’s certainly a component), but the antenna is almost more important than the hotspot itself. A good-quality hotspot with poor antennas isn’t going to be very useful. Your phone has a pretty average- to low-quality antenna, because the antenna is wrapped around the edge of the phone. It works for a phone call, but you know calls drop, and sometimes you have buffering problems and things like that. That may not be the network struggling; it may be the lack of antenna.
Investing in a good antenna system is worthwhile; a company like Peplink makes modems and antennas that are really strong, more industrial-level, and that’s a really good investment. The Cube records onto itself, and it’s connected wirelessly to that uplink, the LTE hotspot (or 5G hotspot, as those become more prevalent). If the network is poor, it’ll just save the file on the Cube and push it slower, in the background, or later when the network becomes available. You don’t actually have to have a network to shoot, but most people shooting in most areas will have some form of network connection. If you don’t, it’ll record to the Cube and publish as soon as you do.
I had a feature shooting in Mississippi in February. When they were in some areas of the swamps, there was no network. When they got back to the hotel, they turned the Cube on, and the Cube used the hotel internet and just published all the clips. They still watched everything the same day, automatically delivered and transcoded. They just had to wait until they had an internet connection, and the Cube published those files from the SD card later in the day.
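That record-locally, publish-when-connected behavior can be sketched as a simple store-and-forward queue. This is a hypothetical illustration of the idea, not Teradek firmware or the Frame.io API:

```python
# A minimal store-and-forward sketch of the behavior described above:
# every take is recorded locally first, and queued clips publish only
# when a network is available. Hypothetical illustration, not Teradek
# firmware or the Frame.io API.
from collections import deque

class StoreAndForwardCube:
    def __init__(self):
        self.sd_card = []        # clips safely recorded on board
        self.pending = deque()   # clips not yet published to the cloud

    def record(self, clip):
        self.sd_card.append(clip)
        self.pending.append(clip)

    def flush(self, network_up, upload):
        # Publish queued clips in order, but only while a connection exists.
        while network_up and self.pending:
            upload(self.pending.popleft())

published = []
cube = StoreAndForwardCube()
cube.record("A001_C001")                              # in the swamp: no network
cube.flush(network_up=False, upload=published.append)
cube.record("A001_C002")
cube.flush(network_up=True, upload=published.append)  # back at the hotel
print(published)   # both takes publish, oldest first
```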
That’s awesome news for people like me who do a lot of work in the field. You can’t use a hotspot in the middle of the Amazon. There’s nothing to connect.
Today you can’t. But remember, satellite technology for LTE and 5G is now coming online. Starlink and things like that are now opportunities where we won’t use local towers all the time in order to do telecommunications. Today, we know that in the broadcast world, satellite technology is used to move signals around. But in the consumer world, we’ve relied heavily on Wi-Fi or LTE, which is tower-based or local-based. That’s going to change and so it means eventually, the entire globe will be online even in the Amazon.
Even when I got back to Lima in Peru, the internet was so bad that I was getting disconnected all the time while trying to upload and download. It’s so slow. This is good news. I love that we can use this technology and not have to depend on a live internet connection. We’ve got the camera hooked up via the Cube, we have audio hooked up, we’re shooting, we have original files staying with us locally, we’re sending everything to Frame.io, and then what happens? How do DITs feel about all of this?
DITs still serve a really important role in two capacities. One, DITs do a lot of color correction. If you want to polish a look, verify a look, or collaborate with the cinematographer, that’s an important role that is not at all impacted by Camera to Cloud. We’re simply taking the DIT’s look and allowing it to be distributed sooner, but it’s still their look, because you can pass the camera into the DIT system and then pass the DIT’s look into the Cube. Just put the Cube at the end of the chain.
The other capacity they serve is downloading the original camera files, or OCF. OCF cannot be delivered to the cloud yet because, as we just said, LTE and satellite bandwidth is still too low for original camera files or RAW files. Even ProRes or Avid DNx, or RAW formats like Blackmagic RAW, REDCODE, or ARRI RAW, are files so big that you could never upload them over a hotspot today, but that will change over the next decade.
Today, a DIT managing the RAW data and managing the color are two key aspects that still need to happen on set. All we’re doing is speeding up the collaboration process with proxies so that people downstream of the set, in post or any other area that needs to be introduced to the footage, can just start doing their jobs sooner. It’s not a job-elimination technology, it’s a job-acceleration technology.
I think anyone that wants to be successful really needs to learn this new technology. Why am I flashing back to years ago when you first introduced Near-Set Dailies? Remember that? I don’t know why I’m thinking about that. For some reason, I’m visualizing it. Totally off-topic.
It’s not off-topic, because of Near-Set Dailies. As my friend Laura Pursley said when she saw the announcement, “Wow, Michael, this is something you’ve been working on for 10 years.” She’s right. This has been some of my life’s work, because for a decade I’ve simply hated this gap between production and post. It just doesn’t make sense. It’s so frustrating. If you think about it, when we shot on film, we transitioned to digital, but the turnaround times didn’t get dramatically faster.
Digital promised everything would happen faster, but it didn’t. The time it took to shoot film, put it on videotape, and digitize it wasn’t that much different from shooting on videotape. Even shooting on files, downloading, transcoding, and sending to editorial, it was still the next day. That word, dailies, still meant a day with film and video. What Camera to Cloud does is finally eliminate days; it makes things virtually instant. Even an hour-long take in the field can be available in under five minutes for everyone to start working with.
That is what I’ve been after, because I found, as an entrepreneur in the post-production industry, that all of these people are just waiting for media from the set. On big productions, it’s an army of people just waiting for stuff from the set. If you’re waiting for something, think about that act of waiting. Waiting creates anxiety in humans. If you’re waiting in line at Disneyland, it stinks because you don’t want to be waiting. If you’re waiting for cookies in the oven, you don’t want to wait for those either. You don’t want to wait for someone to respond to an important email. Waiting is an anxious problem.
When you have something as precious as footage being shot on a production, waiting for it to be looked at, reviewed, QC’d, edited, changed, manipulated, and collaborated on, that is a period of anxiety that needs to go away. That’s been my mission. Moving dailies to the set, which we did (Light Iron actually started that at Plaster City even before), was all about reducing the anxiety by increasing the collaboration and decreasing the time it takes for things to happen. Pioneering near-set dailies helped relieve some of that anxiety.
This is the holy grail version of that. We wanted to relieve all the anxiety by making it instantaneous. We’re finally at the threshold of the way Camera to Cloud should truly work.
I’m OCD when it comes to workflow. I just find that the more organized you can be, the better it is on the other end, and the more the whole team works in sync. It’s the waiting. I’ve seen how fast this moves into Frame.io, and I love it. The other thing is finding things. Let’s talk about naming conventions for a moment. There’s another aspect to this: your audio files are going to be slightly longer than the camera files. How do they sync up? Timecode?
That’s actually a good point. The naming convention comment is really a good one, because when you’re recording a proxy, that proxy needs to have the same file name as the camera file. The camera has a file name, a clip name, and you want the proxy to have the same name. That name is great when you’re running a database, but it’s really hard to search, because you don’t know where anything is by the camera file name. In professional audio (and if you don’t do it this way, I highly recommend it), audio systems like Sound Devices (and there are others) let you name your files by scene and take, or by the name of the interviewee and the take version.
You name the file real quickly. You can do that before it’s rolling; you can even do it after it’s rolling. The idea is that the file name of the sound asset becomes the indicator of the asset; let’s say, “Scene 12 Take 1.” That is the audio name. Now, Scene 12 Take 1 in audio might be A003_C001 in video. How is a director or anyone going to search A003 when they’re looking for Scene 12?
What Frame.io does is, when you jam your audio (meaning you take a jam cable and jam the timecode into the camera), you make the camera and the audio have the same timecode. That cable only needs to be there for a moment, but companies also make little lockit boxes. If you haven’t invested in lockit boxes, you really should, because they’ll really speed up your world by automating a lot of this.
What happens is you record your sound (it doesn’t matter if you start first or second) and you record your picture; it doesn’t matter there either. The audio and the video assets are independently published into Frame.io, then Frame.io analyzes the timecode, finds where they overlap, stitches them together, and creates a third asset, which we call a title asset. Imagine in the cloud you see audio, video, and then you see AV.
An AV asset is audio and video together. It’s a new asset, and that AV asset is now called Scene 12 Take 1. If you shoot for several days, weeks, or months, you can start searching by scene and it will present you a mixed, muxed, synced AV asset. Nobody has to do anything; it’s all there automatically, and you don’t have to do it manually.
Now, in post-production, you should still sync the audio yourself, because you want to have separate picture and sound for your conform process. But when you’re reviewing things in the cloud, we’re doing that step for you, which can save a day or more of just waiting to review assets.
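The timecode matching described above can be sketched as a simple interval-overlap check. Timecodes here are plain seconds and the clip field names are hypothetical; Frame.io's actual implementation is not public:

```python
# Sketch of the timecode-overlap matching described above. Audio and video
# publish independently; when their timecode ranges overlap, they are
# paired and a combined "AV" asset is named after the audio's slate name.
# Timecodes are plain seconds; field names are hypothetical.

def overlap(a_start, a_dur, b_start, b_dur):
    """Return (start, duration) of the shared span, or None if disjoint."""
    start = max(a_start, b_start)
    end = min(a_start + a_dur, b_start + b_dur)
    return (start, end - start) if end > start else None

# Audio rolled first and ran longer; video started 3 s later.
audio = {"name": "Scene 12 Take 1", "tc": 100.0, "dur": 45.0}
video = {"name": "A003_C001", "tc": 103.0, "dur": 30.0}

span = overlap(audio["tc"], audio["dur"], video["tc"], video["dur"])
if span:
    # The AV asset carries the human-searchable audio name.
    av = {"name": audio["name"], "tc": span[0], "dur": span[1]}
    print(av)
```

The same check, run once per camera against one audio take, also covers the multicam case: one audio asset pairs with every camera whose timecode range overlaps it.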
Yeah, at least. For client approvals and interacting with your studio executives, this is awesome. The timecode will match, the metadata will match, and the naming convention is simple. You’ve eliminated the possibility of human error, which is a big part of it. When you have a large crew, sometimes people forget; they’re tired, they’ve been working long hours. This is awesome. How many tracks of audio can be embedded in what goes out from the field?
Today, we handle eight with no problem. I honestly can’t remember if the limit is 8 or 16 right now, but 8 tracks are supported, which satisfies 98% of audio recording in the field. If you need more than 8 microphones, I think we may support 16. But for most people, eight channels or fewer is going to satisfy.
That doesn’t mean you can’t use Camera to Cloud; it means you can only get the first eight channels to the cloud. Let’s say you’re recording 16 channels of audio. The way to monitor that is a stereo mixdown. Field mixers will take all the audio and produce a rough mix so that directors and others can hear a stereo mix of what’s going on. If you have more than 8 or 16 channels, you can simply provide the stereo mix to Frame.io (a choice in a Sound Devices 8 series recorder), and then you’re hearing the composite mix. That’s ideal, because you don’t have to hear the phasing or interference of eight separate channels and keep track of who’s where.
The sound mixer does a rough mix in the field that gets transmitted. Even if you had 32 channels and you’re recording a lot of musical instruments, you would still have a stereo rough mix. You could simply listen to the rough mix, which is probably honestly what you want to listen to if you’re doing a quick review anyway.
Absolutely. What about multicam clips? Does it work with that yet?
Absolutely. A Teradek Cube is a single-channel device, meaning it has one input. If you have one camera, you need one Cube. If you have three cameras, you need three Cubes; they’re one to one. You can run the Cubes directly on the cameras, you can put them in a video village and wirelessly transmit to them through Teradek Bolts, or you can run cables if you want to. But you have one Cube per camera. Each of those Cubes has a serial number when it’s authenticated in Frame.io.
You use the Frame.io iOS app to authenticate the Cube to your Frame.io account. Once it’s authenticated, you can change the name of that device to A camera, B camera, and C camera. Then in Frame.io, every time the A, B, and C cameras roll, you’ll see a video folder, and inside that video folder it’ll say A, B, and C, and that’s where your assets live. They will always self-organize and publish to the directories they’re supposed to go to, keeping things super organized.
If you have one audio recording and three cameras, it will sync the audio three times. In that AV sync directory—those title assets we just talked about—it will mux the same audio to all three cameras so that you can search Scene 12. You can see here’s A cam Scene 12, B cam Scene 12, and C cam Scene 12. That’s an easy way to have that automatic syncing process with multicam work in the background.
Oh my God. I feel like a fangirl right now. I am so excited by this. That also reminds me, talk about authorization and talk about security because a lot of our clients get very, very nervous when anything leaves the traditional editing room. I can think of a couple of studios in particular that just don’t want anything out of those rooms. What can we say to them to reassure them that this is secure?
Well, everybody’s different. Everybody has different pain tolerances, and you’re making a good point about where people feel safe. It’s hard to tell someone who doesn’t feel safe to just feel safe. That’s not enough; they have to learn it. Again, we can look to history to teach us this. When film switched to tape, there were people who were so afraid of tape because of what they had been taught. There was so much doctrine around film lasting forever. They didn’t like the idea of tape.
A lot of people, when digital tape came out, said, we don’t want to use this technology because film lasts longer. Guess what, they got over it. There are ways to make sure that tape lasts too. There was a transition period for those people. When videotape started to give way to files, people were afraid of getting off tape because they said, “Well, I have a tape, I can put it on a shelf. I don’t like this file, and I don’t trust hard drives.” Guess what, they got over that too. They learned to trust that process. There was a transition period.
In fact, in that transition period, for those of you who really remember this process, when people started finishing on videotape, we used to master films on videotape, like HDCAM or HDCAM SR tape, and then the studios would take that tape and film it out for archiving. They would back it out to film so that the archive master was on film, even though we had finished and delivered a videotape. Ironically, the inverse of that happened with files.
When RED cameras came out, I was part of the early RED adopters and deployments in 2007. On the first shows we did, they made us take the RED camera files, which were on a CF card, and load those onto tape. We would get paid to make tapes of RED files because people were afraid of trusting hard drives. Eventually, that went away.
The same process is happening with the cloud. People may be afraid of it, but that’s because we’re in this transition period where they’re used to things on hard drives. The cloud is really just a hard drive. You asked, Cirina, what we should tell those people. There is one thing that everyone in the world (especially and including people who work in production) covets more than the pictures they shoot, and that’s their money. You covet your money more than anything. Money: all of it, not most of it, not some of it, not kind of sort of, not Monday, Wednesday, Friday. All money is managed in the cloud. All your credit card management is cloud-based, and it has been for decades.
If the industry around the world’s money and the bank that you use are all in the cloud, we’re simply adopting the same security standards that the banking industry set decades ago to make sure that your stuff isn’t stolen. Now, when people say, “But I’ve heard of identity theft”: identity theft is stealing the keys to get into your account. It’s not actually breaking down the account system itself. People find ways to get in. It’s the same with video in the cloud.
If you give someone your password or someone cracks your password, that’s a different issue. But when it comes to things being safe and secure in the cloud, the top of the line is money. If we can trust our money in our banks in the cloud, then we can apply those same principles to trust our video and our audio in the cloud. Again, that may be unfamiliar territory, that might create anxiety, but over time, that will go away. People will learn to just trust everything. They’ll learn to manage passwords better than they do. People today are pretty bad with that.
Frame.io has a technique where we have a type of link called a private link. Not only can you not get into that project if you’re not invited into it, but if you share a link, that link will not work unless that person has a Frame.io account on your team. With an ordinary password-protected link, you could still give someone the link and pass it on. But in Frame.io private mode, that link will not work unless they authenticate through your team with a Frame.io login, which means you would know them because all of a sudden you have their name and account.
We can also use session-based watermarks that actually burn into the picture your name, your IP address, your email address, and the date and time you’re looking at it. That becomes burned into the picture, and you can even have it travel. It doesn’t just sit burned in a corner, it moves across the screen.
If you’re really hyper-secure, it’s going to travel across the screen. These are additional layers of security that we put into this. Whether it’s Camera to Cloud or it’s a color-corrected master, Frame.io’s level of private security gives you the ability to govern who’s allowed to get in and who’s allowed to see things. That’s why we don’t have breaches that come through the system. It’s designed to be intentionally very restrictive.
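Frame.io renders these session watermarks server-side, but as a rough illustration of the idea, here is a hedged sketch of how a moving, session-specific watermark could be expressed as an ffmpeg `drawtext` filter. The function name, field order, and drift speed are all assumptions made for this sketch; this is not how Frame.io actually builds its overlays.

```python
# Hypothetical sketch: compose an ffmpeg drawtext filter that burns a
# session watermark (name, IP, email, timestamp) into the picture and
# slowly drifts it across the frame. None of these names come from the
# Frame.io API; they are illustrative only.
def session_watermark_filter(name, ip, email, timestamp, speed=40):
    text = f"{name} | {ip} | {email} | {timestamp}"
    # Escape characters that are special inside drawtext.
    text = text.replace(":", r"\:").replace("'", r"\'")
    # x drifts with time (t) and wraps at frame width (w) minus text width (tw).
    return (
        f"drawtext=text='{text}':fontsize=24:fontcolor=white@0.4:"
        f"x=mod({speed}*t\\,w-tw):y=(h-th)/2"
    )

flt = session_watermark_filter(
    "Jane Doe", "203.0.113.7", "jane@example.com", "2021-04-01 10:00"
)
print(flt)
```

The resulting string would be passed to ffmpeg’s `-vf` option; the semi-transparent `white@0.4` color is the kind of choice that keeps the burn-in legible without ruining review viewing.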
When you set up the stream, don’t you have to authorize it to the recipients? You can actually put an end time on that. You can say okay, this shoot’s going to last for two days. I’m going to authorize this to be seen by the editors in LA or Australia, and then it goes away. Am I right about that? How does that work?
That’s absolutely right. Thanks for reminding me. Once you authenticate an appliance like a Teradek Cube or a Sound Devices 888, that appliance might be rented. You’ve now given the keys to that device to publish into your account. That’s a one-way relationship. That device can only publish into Frame.io, it can’t pull anything out. Even so, that device could be rented. What we have is an expiration date.
The user can be anywhere in the world. You can be an administrator, you do not have to be on the set. The administrator can pause all devices from anywhere in the world, delete devices from anywhere in the world, or set devices to self-expire from anywhere in the world. If you know you’re renting a device and your shoot is for five days, you can specify when you initially authenticate it: I want you to self-terminate on day six. Those appliances will delete themselves from your project, and no one else will be able to re-authenticate them because the authentication process is only open for 120 seconds. You only have two minutes to go through the authentication process, which uses a code, sort of like syncing your phone to a rental car, you get that code.
Once that code window is closed, that device is synced to that account, and you can never re-sync it without a new device authorization grant. As we said, you can terminate it when that period ends. It’s a restrictive-by-design process, but it’s very easy. It takes less than 20 seconds to authenticate a device to the cloud. Once that grant is issued, that device will shoot or record out into the project, and once you terminate the grant, it never will again.
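The flow Michael describes—a short-lived pairing code, a one-way publishing grant, and an optional self-expiration date—can be modeled in a few lines. This is an illustrative sketch under assumptions, not the Frame.io API; class and method names are invented for the example.

```python
# Illustrative model of the device pairing flow: a 120-second pairing
# window, a one-way (publish-only) grant, and optional self-expiry.
import secrets
import time

PAIRING_WINDOW_SECONDS = 120  # the two-minute authentication window

class DeviceGrant:
    """One-way publishing grant for an on-set appliance."""

    def __init__(self, device_name, expires_at=None):
        self.device_name = device_name
        self.pairing_code = f"{secrets.randbelow(10**6):06d}"  # 6-digit code
        self.code_issued_at = time.time()
        self.paired = False
        self.expires_at = expires_at  # e.g. "day six" of a five-day rental

    def redeem(self, code, now=None):
        """Pair the device; only valid inside the 120-second window."""
        now = time.time() if now is None else now
        if self.paired:
            raise PermissionError("grant already redeemed; issue a new one")
        if now - self.code_issued_at > PAIRING_WINDOW_SECONDS:
            raise TimeoutError("pairing window closed; issue a new grant")
        if code != self.pairing_code:
            raise PermissionError("wrong pairing code")
        self.paired = True

    def can_publish(self, now=None):
        """Upload-only: a grant never lets the device pull media back out."""
        now = time.time() if now is None else now
        if not self.paired:
            return False
        return self.expires_at is None or now < self.expires_at

# On set: pair a rented appliance within the window, expiring after day six.
grant = DeviceGrant("Teradek Cube 655", expires_at=time.time() + 6 * 86400)
grant.redeem(grant.pairing_code)
```

The key design point the sketch captures is that a closed or redeemed code can never be reused: re-authorization always means issuing a fresh grant.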
It’s awesome. That feels really good. Frame.io transfer works really fast. How’s that possible? It just seems like you guys are on high octane over there or something. I’ve got a one gig connection here. Obviously, COVID, thank you very much. Built a studio in the house. But I do have a one gig connection. Even so, the other services like Dropbox, it just takes forever. How is Frame.io so quick?
Thank you for the compliment. The Frame.io Transfer team is an exceptional group of people. Frame.io Transfer was born out of Thomas Szabo, who basically started building this rapid-speed upload and download tool. It came out in the summer of 2020. Now, it’s getting rapidly adopted. If you are not sure about signing up for Frame.io, what Cirina just said is the best reason—a Frame.io account will give you access to Frame.io Transfer, which is a Mac and Windows hyper upload and download tool. It is so good that it becomes the anchor for why a lot of people actually get into the Frame.io community because they really struggle with uploads and downloads.
The reason we’re able to do it so fast is truly optimization. Other companies that have to transfer data may not know what type of data they’re being asked to transfer. Since a lot of the world is moving documents, documents are moving disproportionately more than video. Video is the minority of assets that are moving. We know video is huge, but if you’re a company and you have to move a lot of assets, they’re probably small in nature. Most assets in the world are tiny.
When it comes to Frame.io Transfer, the optimization is for this community—the people that you and I know Cirina are the people that want to have a system designed for big files. That’s not what the other tools are designed to do. They’re just designed to move files. We have really been specific about what types of files we want to optimize for, how we’re chunking those files, how we’re moving them, and then how mean or grabby we are with your internet connection.
By that I mean when you deploy Frame.io Transfer, and you’re uploading a big file, if someone else in your house is playing Xbox, or they’re trying to watch Netflix, they’re going to get pissed because Frame.io Transfer is designed to grab and hog as much of the network as possible because it’s assuming this is a professional application. You have a professional need to leverage all the networks available. Frame.io will grab the network away from other devices. These other uploading and downloading tools don’t necessarily do that. They try to play nice and play fair, and they don’t overtake your network.
Frame.io Transfer is a hog, but that’s what pros want. You want your network to be deployed for a professional action of moving a high-quality file. Once those assets are moving, you can start a chain of transfers. You can change the order of them in the middle of the transfer. If a priority changes between several assets in progress, I can actually just drag and drop one to the top of the queue while it’s going and it’ll favor that one over the others. Little tricks like that are where the optimization for pros exists in the transfer world.
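The mid-flight reprioritization Michael describes depends on files being moved in chunks, so a transfer can be preempted between chunks. Here is a minimal sketch of that idea under assumptions; it models the behavior only and is not Frame.io Transfer’s implementation.

```python
# Toy model of a chunked transfer queue where a transfer's priority can
# be bumped while it is in progress, so its remaining chunks jump ahead.
CHUNK_SIZE = 4  # bytes per chunk; real chunks would be megabytes

class TransferQueue:
    def __init__(self):
        self.transfers = []

    def add(self, name, data, priority=0):
        # Split the file into chunks so a transfer can be preempted mid-file.
        chunks = [data[i:i + CHUNK_SIZE] for i in range(0, len(data), CHUNK_SIZE)]
        self.transfers.append({"name": name, "priority": priority, "chunks": chunks})

    def bump(self, name):
        """Drag a transfer to the top of the queue while it is in flight."""
        top = 1 + max(t["priority"] for t in self.transfers)
        for t in self.transfers:
            if t["name"] == name:
                t["priority"] = top

    def next_chunk(self):
        """Send one chunk from the highest-priority unfinished transfer."""
        pending = [t for t in self.transfers if t["chunks"]]
        if not pending:
            return None
        t = max(pending, key=lambda t: t["priority"])
        return t["name"], t["chunks"].pop(0)
```

Because only the queue ordering changes, a bumped transfer resumes exactly where it left off once higher-priority chunks are done.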
This group of people that are listening, this is who we’re trying to market to. This is who we’re innovating for. We’re not just trying to be a tool for everyone on the planet. We’re for professional video and audio collaborators. Those people have different needs. The other tools that move assets around in the world, they have to serve many masters. Maybe to our advantage, we only have to serve one master, which is the pro-audio-video community. We’re able to tailor our tech directly for you.
Remember when we were saying 4K files are going to be too big?
Oh my God. I like that Frame.io Transfer is the big kahuna. It says, move over, I’m in the house. I love that. That’s awesome. Talk to me about Colorfront and what Colorfront is doing in conjunction with what you have there.
For anyone that doesn’t know Colorfront, Colorfront is basically the world’s best transcoding program. It’s just an incredible group of engineers from Hungary that have really dialed in how efficient transcoding can be. If you’re a person that tends to shoot files and just drag them right into Premiere, Final Cut, Resolve, or even Avid and work with them natively, you have no need for Colorfront. But if you find yourself constantly transcoding files and making versions of them, you’re probably using Adobe Media Encoder or Resolve to do those transcodes.
The problem with those tools is they are single-channel transcoders. You set things up and they output a version, then you set up new things and they output another version. What Colorfront can do is take a source and transcode it in multiple ways concurrently. It will grab your GPU acceleration and it will make a 4K, a 1080, an H264, a ProRes, and an Avid version, and it will do them with window burn or without, clean and dirty, with color or without, with matting and masking or without.
You can set all of this up as a macro. You can build a macro of the four outputs you need to generate, set everything up in your project, and then push them all out at once. It’s a massive time saver, because if you find yourself making generic transcodes that are trying to serve three different masters, that is not ideal. Colorfront allows you to make all of those custom outputs at the same time. It will be faster because their acceleration is basically best in class.
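The macro idea above—define several output recipes once, then fan a single source out to all of them concurrently—can be sketched in a few lines. The recipes, field names, and `render` stub are stand-ins invented for this example, not Colorfront’s actual API.

```python
# Sketch of a "macro" that renders multiple output recipes for one
# source concurrently, the way a multi-output transcoder fans out work.
from concurrent.futures import ThreadPoolExecutor

# Illustrative output recipes; field names are made up for this sketch.
OUTPUT_RECIPES = [
    {"name": "editorial", "codec": "dnxhd",  "res": "1920x1080", "burnin": True},
    {"name": "review",    "codec": "h264",   "res": "1280x720",  "burnin": True},
    {"name": "finishing", "codec": "prores", "res": "3840x2160", "burnin": False},
]

def render(source, recipe):
    # Stand-in for a real transcode job; here we only describe the work.
    burn = "with burn-in" if recipe["burnin"] else "clean"
    return f"{source} -> {recipe['name']}: {recipe['codec']} {recipe['res']} ({burn})"

def run_macro(source, recipes):
    """Fan one source out to every recipe concurrently."""
    with ThreadPoolExecutor(max_workers=len(recipes)) as pool:
        return list(pool.map(lambda r: render(source, r), recipes))
```

In a real pipeline each `render` call would be a GPU-accelerated encode; the point of the macro is that the source is read once and every deliverable comes out of the same pass.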
The Frame.io integration now allows you to upload files into Frame.io—either Camera to Cloud or just from Frame.io Transfer—and Colorfront can now read Frame.io natively in the cloud. What that means is you no longer have to localize your assets to transcode them. Imagine you push things into Frame.io cloud when you want to transcode, change them, add color, window burn, watermark, and versions to make DPX files, EXR files, DNX files, ProRes files, or HEVC files, you can do that cloud to cloud.
Once it’s in Frame.io and it’s in Colorfront—the tool we use is called Express Dailies—Express is rendering cloud to cloud. You’re rendering at 3,000 megabits per second because you don’t have to download the assets or upload them, it’s cloud to cloud. Not only can you transcode and replicate these files super quickly, you never have to have them on a local disk, so you don’t take up space, and you don’t take up upload time or download time.
Colorfront is the first cloud dailies rendering tool. Once you start seeing the opportunities for that, you start to really see how workflow can change dramatically and how disconnected you can be from local storage, because it’s a release. If you’re not limited by a local disk, all of a sudden, you have the freedom to work anywhere. You can have your dailies people working wherever they want to work and they don’t need a disk. They don’t even need access to the footage. It’s also more secure because they don’t have a copy of the footage. It’s hard to keep track of who has copies of this stuff, and the moment someone downloads, there’s another copy out there.
With Colorfront Express in the cloud, you can just have Frame.io’s cloud and Colorfront’s cloud render straight to the cloud, and there’s not another copy of your assets. It’s a very new way of working—very, very powerful.
My friends are going to be laughing at me because you have managed to convert me. For years it’s been: I don’t believe in the cloud, don’t talk to me about the cloud, it’s too slow, we don’t have the pipeline, I don’t trust it. Oh, Michael. I’m changing my mind.
That’s all true though. Everything you just said was true to a point.
Yeah, in the old days.
People used to say, “You’ll never get a reputable DP to ever shoot on a digital camera.” Well, there was a time when that was accurate, but over time, that obviously changed. When you make a statement like, don’t talk to me about the cloud, or don’t talk to me about this or that in technology, that may be accurate for a period of time. But it’s thanks to innovators, creatives, engineers, and all these people working together that these things evolve.
As I mentioned at the top of the program: LTE, 5G, Amazon Web Services, Wi-Fi, fiber, Fios, things like that. These are all evolutions that had to be foundationally improved before the cloud really started to get into the professional sector in the way we’re using it today.
COVID simply became the catalyst that accelerated a lot of technology development. The last 12 months probably saw more technological deployments and rapid developments than (I would say) at least the previous 20 years of digital technological change. This is another outside force acting on the professional world. Ultimately, at the end of the day—as sad and difficult as COVID has been—there’s going to be a bright side to the story. I think we’re starting to see that story emerge right now.
Absolutely. I just have a couple of quick questions and I know I’m going to need to let you go because we’ve been talking for a while. What about versioning? I always worry when somebody alters a file. Say your editor gets a file and he alters it or does something to somehow change, either the metadata or just any aspect of the media, how do you handle the title, the name of the file? How do you handle versioning?
The number one thing is to understand the difference between a file name and a reel ID, also known as a clip name or clip ID. You usually don’t see the clip ID, reel name, or clip name. You usually just see the file name, the file that you see. If you’ve ever been in Premiere, Final Cut, Avid, or Resolve, you’ll notice that when you bring a clip into a media pool or a bin, you can change the name there to whatever you want, but it doesn’t change the actual file on the desktop. That’s because you’re actually just changing the metadata inside the app, and you’re not changing the clip name, the clip ID, or the reel ID.
Sometimes, that’s also known as the tape name. For those of you who look into the metadata of these files, you’ll also see a field called tape name. Different companies use different syntax for that. But essentially, what it means is that in Frame.io, you really don’t need to change any names. You really shouldn’t need to change names. Once you get a clip into a non-linear editor, you can change the metadata to whatever you want because you’re not really altering the tape name or reel ID. That’s what a conform or relinking process is actually looking at. When you do a conform, it’s looking for the clip name, the reel ID, or the tape name.
If you’re doing a relink, a relink is going to use the file name. That’s the name you can see. That’s why you don’t want to change that name, because a relinking process is going to look at it. A conform is going to look deeper into the metadata. Basically, what I just described is a little bit of an antiquated leftover technique from videotape, broadcast, online, and things like that. What would be great is if relinking systems also allowed you to relink by clip name.
Some tools do allow you to do that. It’s not as obvious, you have to dig a little deeper. Resolve has a panel where you can relink by virtue of different attributes. If your file name has changed for some reason, you’re not necessarily screwed, you can relink in other ways. Not everybody has explored those ways. Obviously, best practice is don’t change the file name. If you do, understand that there are other things at play that will allow you to get back to the original reel ID or tape name.
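The distinction Michael draws can be boiled down to which field the matching step reads. Here is a minimal sketch under assumptions: a relink matches on the visible file name, while a conform matches on the embedded tape name (reel ID). The dictionary field names and sample clip names are invented for illustration; real NLEs store this metadata in their own formats.

```python
# Sketch of relink vs. conform matching. "file_name" is the name you can
# see on disk; "tape_name" is the embedded reel ID that survives renames.
def relink(timeline_clip, media_pool):
    """Relink: match on the visible file name, so renames break it."""
    return [m for m in media_pool if m["file_name"] == timeline_clip["file_name"]]

def conform(timeline_clip, media_pool):
    """Conform: match on the embedded tape name (reel ID), surviving renames."""
    return [m for m in media_pool if m["tape_name"] == timeline_clip["tape_name"]]

# A clip whose file was renamed on disk: relink fails, conform still matches.
pool = [{"file_name": "day1_take3_renamed.mov", "tape_name": "A001C003"}]
clip = {"file_name": "A001C003_210401.mov", "tape_name": "A001C003"}
```

This is why the best practice is to leave file names alone: the relink path has nothing else to fall back on, while a conform can always dig into the metadata.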
We actually produced a series called the Frame.io Camera to Cloud Training Series. It’s out in the month of April going forward. If it’s past then, you can see it now. There’s an introduction to post-production, which is episode 4 or 5. That really starts to go into the clip name, tape name, reel ID, and file name explanation. You should learn about that if it’s unfamiliar, because Camera to Cloud is a totally new workflow. It will require you to have a little bit more experience with things like that because you’re going to do more relinking than you’ve ever done, and you’re doing it in a way you probably haven’t done before.
There are some things you have to learn in order to figure that stuff out. We built the training series and we show you how to relink in Avid, Premiere, Final Cut, and Resolve. We teach you how to conform to those with Camera to Cloud files so that you have a roadmap on how you can make it work in your space.
We’ll definitely put a link to that in the show notes. Before we go, I just want you to look into your crystal ball and tell us where you think post-production is going in the future. What’s a post house going to look like 10 years from now?
That’s a really fun question. I love looking into the future. The first thing with the future is everything is going to be captured in the cloud. As unfamiliar or as bizarre as it would look today to see a videotape—think of a MiniDV tape, an HDV tape, a DVCPRO tape, or DVCAM—if someone showed up with a DVCAM camera today, you would probably be like, that’s what we’re shooting on? That tape cassette thing seems odd. Even in audio, think of DAT tapes or DA-88 tapes. It would seem really odd if someone showed up with something that recorded that way. It still works, but it would be odd.
The thing that I’m getting at is that 10 years from now, cameras with the magazines, SD cards, CF cards, and CFast cards that you shoot on will seem just as strange. It’s going to be as odd in 10 years to have a camera with a card, with removable media, as it is to see a DAT tape or a DVCAM tape today. It’s going to be that bizarre in 10 years. Nobody will have removable media on cameras a decade from now.
Cameras may have internal recording so that they have a cache, but they’re going to be designed to transmit everything that they’re shooting—including RAW files—directly to the cloud. That’s how they’ll work. We’re going to see this happen with audio first. You’ll see fewer and fewer sound people with removable SD cards. Eventually, media won’t be removable because it won’t need to be. Bandwidth will improve and capacities will get so high you won’t fill them up, so they’ll just be an internal cache.
Video cameras will follow suit. There’s going to be a big change in the way we acquire things on the set over the next 10 years. In terms of post-production, that is going to change a lot about the utilitarian jobs that the post house provides. In the post-production world, if you don’t know this, I used to own a post-production house, so I know the secrets of where we make the most money versus where we don’t. Where you make your money is like the movie theaters. Where do the movie theaters make their money? Everyone knows it’s the popcorn.
That’s right. Except for right now, they don’t make money at all. Concessions are where they get it. They don’t make money off the films, they make it off of popcorn. You make a bucket of popcorn, you sell it for $11, and all of a sudden, you put $10.96 in your pocket. In post-production, the utilitarian jobs are the ones that make the most money. That would be dailies syncing, transcoding, uploading, downloading, and making LTO archives. That’s where the most money in post-production is. It used to be in dubbing and making copies on videotapes. That was really high-dollar output.
Where post-production companies don’t make money is in DI, in the actual art and process of color correction. It’s not that they don’t make money there, it’s that they make the lowest margin because those happen to be the most expensive processes of post-production because the operators that are involved there are really well-compensated. The monitors, the infrastructure for the rooms, projectors, and so on that they need are really expensive. You think about the square footage required—you have a big room, a huge cavity, and you have one person sitting in it—it’s tough to ROI that square footage.
What’s interesting is, the part of post-production that’s going to remain valuable to post-production is the creative services component. It’s the DI. It’s the stuff where you hire creative people to collaborate with you, but that’s the part today that doesn’t make the most money. Your ROI is not the strongest in that section. You make more money in the utilitarian parts. The utilitarian parts of post-production are all going to become virtualized over the next 10 years.
Frame.io is doing the first cloud-based automatic audio-video syncing. It’s not perfect today, but we’re taking pictures and sound and they’re going to the cloud. They’re automatically, instantly syncing, and presenting themselves to a user. That’s never been done before and it’s done right now completely without humans. That is the first step of total utilitarian automation.
Those are not creative steps. Syncing audio isn’t a creative thought. You don’t creatively think, how should I sync this audio? It doesn’t work that way, it’s demonstrative. The demonstrative parts of post-production are the elements that people are going to have to look out for, because that’s the stuff that companies like Frame.io and others are going to start to automate. You have AI components that are going to make it smarter and smarter. As we know from how AI works, the more you plug into it, the better it gets at the process it’s asked to do.
We’re going to see a lot of utilitarian elements get better and better, more automatic. If it’s automatic, it gets cheaper. If it’s cheaper, it scales faster. It’s a cycle, a feedback loop that basically makes parts of post-production rise to the cloud once you see these tipping points start to happen. They all will start to go. Camera to Cloud is the very, very beginning of this. This is not an industry standard right now. It is not a post-production house killer. I believe it’s the start of a 10-year journey going forward where we will start doing everything virtually.
Things like Colorfront, which I talked about, are a virtualized dailies component. Parts of dailies are utilitarian, like syncing or naming files; parts are creative, like applying color correction and tweaking. Those parts of the process are going to see a little bit of AI integration. Maybe for dailies, AI color correction is satisfactory. For a lot of people, that will certainly be enough. AI color correction is going to be really cheap because it scales, and scalable things are less expensive.
This is where post-production houses are going to have to evolve again. Fifteen years ago, the post house was faced with the digital revolution, and most of them were anchored in film. Film was a very, very lucrative element of the post-production industry. When it went away, they had to make massive changes in order to stay relevant. Some post houses never made it through the digital transition. Many of them died.
In 2007, 2008, 2009, and 2010, we probably saw the largest percentage of post-production closures in Los Angeles in history, huge percentages. I was at all the auctions of those houses and I bought their gear because they had good gear but they didn’t have any business. Their models changed underneath them. The recession had something to do with that too. Many people grew out of that and built new infrastructures. Some of them pivoted, survived, and thrived in the digital world.
Now, this is not the digital world we’re going into, it’s the cloud world. It’s a new era—the film era, the analog era, the digital era, and now the cloud era. Don’t think of the cloud as just an offshoot of the digital era, because even though the files are encoded digitally, the way we interact with them is not at all like traditional digital interactions. Most digital interactions are local media, local hard drives, local sends, local deliveries. There are a lot of physically present interactions in the digital space. The cloud makes it omnipresent and therefore there are fewer interactions in a physical form, it’s virtual.
As you see that virtual take-off, you will see major changes in infrastructure and automation. The post-production community is going to have to respond. They will still be relevant in the future. I will never bet against post-production, but I will bet against antiquated models, always. Today’s model is going to become antiquated, just like yesterday’s model became antiquated, the model before that became antiquated.
People that didn’t get on digital audio—I was friends with a mixer who refused to do digital audio. He said analog is better and he had a book the size of Moby Dick proving that out but it didn’t matter. Even if he was right, it didn’t matter because the trend tipped and digital audio put him out of business. If you go back 100 years, when sound hit pictures, there were people that said the audiences can’t handle picture and sound or the theaters will never retrofit their stuff to add speakers. Some theater owners and chains said, we’re never going to add audio to our theaters, and they went out of business too. Some of them thrived, some of them died.
It’s all about antiquated modeling that has to constantly be evolving, and the cloud will change everything. If your business is in business and it’s not deploying to the cloud or not leaning on it, you will not survive. The cloud will change every single business on the planet, every single one. There is not a business that the cloud will not touch or completely reinvent. If your business is not centered around the cloud today, you have two choices—get on it now or get on it later.
You don’t have a choice to not get on it. That’s a choice for extinction. I guess that is the third choice. Some people choose door number three. Door number one is get on it now, door number two is get on it later, door number three is wait and see what happens. You don’t want to be last to the party. This is a transition that you want to get proficient at sooner than later because it will help you see the scope of the market and maybe you can thrive too.
I think we are sitting on a milestone in technological history. I’m going to call you ten years from now, and we’re going to reminisce about today.
There will be a lot of news. Where do people go? Where do you want them to go to learn more about Frame.io, Camera to Cloud, Michael Cioni? Where do they go?
I’m easy to find on the internet. You’re always welcome to reach out. Also, people can email me. My email is firstname.lastname@example.org. Frame.io is an easy URL to remember. The Camera to Cloud information is frame.io/c2c. It’s very easy to find Camera to Cloud and get information about it.
We are publishing a lot of documentation. There is support documentation, there are videos, there’s training, there are testimonials, because we’re trying to light a fire under an entirely new workflow paradigm. This isn’t a feature. It’s a paradigm. This is a completely new way of working, and it just starts with Camera to Cloud. We need everyone to feel that there are resources to learn from.
If this sounds cool but it’s unfamiliar, congratulations, you’re normal, because this is new and unfamiliar. We’re figuring it out. I think I’ve said it before: we’re building the bridge that we’re standing on, but that’s my style. I like that. I like living on the edge. We are engaged with the community, getting your feedback, and then reflecting that feedback in support documentation so that you can learn and prepare how to make a Camera to Cloud workflow ideal for the way that you prefer to work.
Thank you so much for being with us. That’s Michael Cioni. I’m Cirina Catania and I’m about to sign off. Before I go, remember what I always tell you guys, get up off your chairs and go do something wonderful today. Actually, Michael brought something to mind.
If you are of a mind to do it, you can go on Fandango, you can search for a movie, and you can actually go to the theater—at least I can here in San Diego. I know there are many cities that are doing it now across the country. Try to see them in a theater. It really makes a difference out of respect to the filmmakers and the teams that put those movies together. Let’s support our theaters and hope they stay in business.
Michael, thank you for being with us. I’m signing off. We’ll talk again very soon.
- Michael Cioni
- Michael Cioni – Facebook
- Michael Cioni – Instagram
- Michael Cioni – Twitter
- Michael Cioni – LinkedIn
- email@example.com – Email
- Frame.io – Facebook
- Frame.io – Twitter
- Frame.io – Instagram
- Frame.io – LinkedIn
- Frame.io – YouTube
- Frame.io Camera to Cloud
- Frame.io Camera to Cloud Training Series
- Frame.io Transfer
- Lights. Camera. Cloud. – YouTube video
- Laura Pursley
- Thomas Szabo
- Moby Dick
- Adobe Media Encoder
- Adobe Premiere Pro
- Blackmagic Design
- Blackmagic Design DaVinci Resolve
- Colorfront Express Dailies
- Final Cut Pro
- Light Iron
- Logitech BRIO
- RED Helium
- Sound Devices
- Sound Devices 8 series
- Sound Devices 888
- Sound Devices Scorpio
- Teradek Cube
- Teradek Cube 605
- Teradek Cube 655
- Apple ProRes
- ARRI RAW
- Avid DNx
- Blackmagic RAW
- SDI Camera
- Have a clearly defined video content strategy. Align your video strategy to the pain points or challenges of your audience to ensure your content is relevant.
- Create a viable post-production workflow for your team. You should have a plan on how to move your files and assets efficiently and safely through your workflow’s pipeline.
- Invest in the right collaboration equipment and tools for your team. With a laptop and a reasonable internet connection, you can start collaborating with your post-production team. However, having the right equipment and tools will ensure that your project will be efficiently and professionally done.
- Manage your communication effectively. You should have communication tools, like Slack, to communicate with remote team members. You should not only rely on email to communicate with your team.
- Have a centralized project management platform for your team. You can opt to use Asana or Trello or any project management tools to organize your team’s deliverables.
- Use Frame.io to efficiently collaborate with your post-production team. Frame.io handles nearly every post-production file format, automatically generates proxies, and integrates with all of the major NLEs. It’s very quick to get up and running in your existing workflow.
- Build the regular habit of movement. Working from home can make you sedentary. Step away from your computer for at least 15 minutes to get some fresh air and sunlight, and collect your creative thoughts. Also, encourage your team members to take these breaks.
- Manage distractions. There will be a lot of distractions while working from home. Discuss these with the members of your household. Decide when you can and cannot be interrupted. Also, block all of the various sites and apps that steal your attention.
- Always encourage your team members. Your mental health and the mental health of your team are the cornerstones of your creativity and productivity. Don’t take them for granted.
- Visit Frame.io to learn more about its latest innovations available. Also, check out its new technology, Camera to Cloud, at frame.io/c2c.