Welcome to Doom9's Forum, THE in-place to be for everyone interested in DVD conversion.

Old 11th December 2019, 02:52   #1  |  Link
FranceBB
Broadcast Encoder
 
 
Join Date: Nov 2013
Location: Royal Borough of Kensington & Chelsea, UK
Posts: 2,883
Encoding my first movie (what I learned)

Last month, the very first movie I encoded premiered at the cinema.
I was very happy with that, especially 'cause it came a year after I asked my boss to let me work on something that wasn't sport or news.
Today I want to make a few points about what I learned from this whole experience. Many of you are well-known encoders and engineers, but hopefully my words will be useful for some people approaching the encoding world for the very first time.



1) Shooting
Although there are contrasting opinions about this, I strongly suggest you always shoot in Log with a camera that has a decent number of stops. It doesn't matter whether you're gonna go to BT.709 SDR, BT.2020 SDR or BT.2100 HDR (HLG or PQ): shooting in Log will almost certainly satisfy your needs. The reason I say this is that you expect camera operators to be experts who know their shit, and many of them indeed are, but it's very easy to mess things up, and if you don't shoot Log you have virtually no margin to adjust the scene in post-production. The director may or may not like it and, you know, recording a scene again from scratch because of a mistake made by an actor is understandable, but shooting it twice because of a "technical" mistake/imperfection is not, so...
Anyway, shooting Log also lets you capture as much information as the sensor of your camera can get, which basically means that blacks are not gonna be crushed and whites are not gonna be clipped, so you can make the best of it. Besides, shooting Log will probably save you from errors like overexposure and so on (I did occasionally receive slightly overexposed footage, but thankfully, since it was Log, it didn't look terribly awful once graded).
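To make the highlight-headroom point concrete, here's a toy sketch in Python (the curve and its constants are mine, purely for illustration; real cameras use vendor-specific transfer functions like S-Log3 or Log-C):

```python
import math

def clip_linear(x):
    # Straight linear encode: anything brighter than reference white clips.
    return min(max(x, 0.0), 1.0)

def toy_log(x, scale=0.18, stops=8):
    # Toy logarithmic encode (my own illustrative curve, NOT a vendor one):
    # maps the range scale/2**stops .. scale*2**stops onto roughly 0..1,
    # with mid grey (scale) landing at 0.5.
    if x <= 0.0:
        return 0.0
    lo = scale / 2 ** stops
    return min(max(math.log2(x / lo) / (2 * stops), 0.0), 1.0)

highlight = 0.18 * 16   # a highlight 4 stops above mid grey, in linear light

print(clip_linear(highlight))  # 1.0  -> clipped, highlight detail is gone
print(toy_log(highlight))      # 0.75 -> still well below clipping
```

The clipped value can never be recovered in post; the log-encoded one can still be graded up or down.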


2) Audio Recording
There are countless things that can go wrong; many movies are recorded with a boom microphone from above, which is fine, but if you are making a documentary like in my case I would totally go for a (lavalier) microphone on the chest of each and every person interviewed. It doesn't matter if it looks awful: the quality of the audio is gonna be so much better, it's gonna be crystal clear. If people touch it by mistake or move their head up and down while they talk, don't say "It's fine, I'll fix it", 'cause in the end you'll never do it and it won't be nearly as good as it could have been if it had been recorded properly. So make yourself clear and make them understand that it's really important that they don't touch it, don't move their head up and down and don't rub their chest. Of course, always listen to it while it's recording live, record it lossless, at least in PCM 48,000 Hz 24 bit, and make sure everything is fine by bringing along a monitor like the ones made by Tektronix.


3) Editing
My preferred software for editing is Avid Media Composer, which is also used by many famous companies like HBO, Marvel, Lucasfilm and so on. It's been developed for years and it has a lot of nice tools to edit a movie. Anyway, it's really important that you DON'T edit the masterfile directly, simply because it's going to be a huge file. If your camera supports a lossless mode like .dng, use it. I often call .dng files "the elephant"; you might laugh at that, but most lossless recording modes save the footage as a sequence of images (literally every frame is stored individually), which makes them huge and impossible to handle. If you are working with work-spaces (i.e. shared network RAID storage), you almost certainly don't have enough bandwidth to do anything with such a huge file. This is where the encoder comes in and makes a lossy mezzanine file which is meant to be used for editing. You can use whatever you want as a mezzanine file, just make sure the bitrate doesn't skyrocket, so I suggest a constant bitrate one. The quality of the mezzanine file doesn't matter; it doesn't have to be perfect, because you're gonna re-link to the original lossless file once you've done your editing, so it's literally just a temporary one. Once you've finished your edit, enable the dynamic relink option in Avid, re-link to the original lossless file and export the .aaf file with rendered effects if they need to be rendered. An .aaf file is simply a metadata file which allows the sequence you edited in the timeline to be used by another editing programme like DaVinci Resolve. Of course, your secondary editing programme won't have all the effects of the one you used, so some of them will have to be rendered.
If you are going to render effects and you can't afford uncompressed lossless, make sure you choose something with a reasonably high quality like DNxHR 4:4:4 12bit. I like it 'cause it has a very high bitrate, it's 4:4:4, and 12bit should be enough to avoid any sort of banding. If you're going to use different mezzanine files in your workflow, please please please make sure that they're lossless; if you have to go lossy, it's mandatory that you use the same codec or very similar codecs (i.e. codecs that use the very same transform and have similar parameters). This is because different codecs with different transforms cut off different kinds of frequencies, so what you end up with has the worst of both.
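To give a feeling for why the lossless original is "the elephant", here's a rough back-of-envelope in Python (the frame geometry and bit depth are illustrative assumptions, not from any specific camera):

```python
# Back-of-envelope: why you edit a mezzanine, not the lossless original.
# Figures are illustrative: an uncompressed 4K, RGB, 12-bit image sequence.
width, height = 4096, 2160
bits_per_pixel = 3 * 12          # RGB, 12 bits per channel, uncompressed
fps = 24

frame_bytes = width * height * bits_per_pixel // 8
stream_mbps = frame_bytes * fps * 8 / 1e6    # megabits per second

print(f"{frame_bytes / 1e6:.1f} MB per frame")       # 39.8 MB per frame
print(f"{stream_mbps / 1e3:.2f} Gbit/s sustained")   # 7.64 Gbit/s sustained
```

Several Gbit/s of sustained reads per seat is exactly what a shared RAID over the network can't deliver, which is what the mezzanine solves.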


4) Color Grading
My de facto choice for this is DaVinci Resolve; it's no doubt the best software to do color grading. It's very easy to make masks in Resolve, so don't be afraid to use them to individually grade objects, parts of the background or parts of the scene, like the body of a person and so on. Don't worry about motion: it has a built-in tracker which does the motion analysis for you and keeps track of moving objects within your mask. Another great tool is the Face Refinement, which works great in many scenarios and can help you on many occasions. As for the Log footage I talked about: if you wanna go SDR you can go SDR, and if you wanna go HDR and your camera has a decent number of stops, you can go HDR. But if you're going HDR (either HLG or PQ) and you also have to make the SDR version, please don't be lazy: grade the HDR version first and then re-grade the entire movie in SDR. The reason is that if the image you shot in Log had, let's say, a building and the sky, in HDR you're gonna be able to keep both the actors walking in front of the building and the blue sky with the clouds. The clouds of a sunny day are gonna be around 1000 nits, and you're easily gonna retain all the shadows cast by the sun on the actors walking in front of the building, and the details on them. When you grade in SDR, though, all you have is 100 nits, so it's worth making a mask, bringing the sky down to 100 nits, grading the actors differently and trying not to crush the shadows too much. The thing is, in order to retain the details of the sky and the clouds, those have to be brought down to 100 nits, and the tonality of the sky in BT.709 at 100 nits will look very different from how it looks in BT.2100 HDR (either HLG or PQ).
This is because if you record the very same scene in BT.709 SDR 100 nits directly, you get the sky completely clipped out: completely white, no blue, no clouds, just horrible uniform white. You don't have enough nits to record it, so anything beyond 100 nits is just considered pure white. When you record in Log, however, you have as many nits as the stops of your sensor allow, so the sky could easily peak at 1000 nits with all the information perfectly recorded. So... how do you bring it down to 100 nits without clipping it out? Well, with a mask, but (and this is the point I'm trying to make) it will look anything but natural, because you're "forcing" something to be where it shouldn't, like you're darkening it on purpose. The director may or may not like it, so it's very important to grade both the HDR and the SDR version starting from the original Log masterfile, using DaVinci Resolve with the .aaf metadata file created by Avid Media Composer.
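The difference between hard clipping and the masked re-grade described above can be sketched in a few lines of Python (toy luminance values of my own choosing; real grading obviously works on images, not lists of nits):

```python
# Toy comparison: hard clip at the SDR peak vs a masked re-grade.
# The numbers are made up; real grading works on images, not lists.
sky   = [800.0, 950.0, 1000.0]   # cloud detail, peaking around 1000 nits
actor = [40.0, 55.0, 70.0]       # foreground, already inside SDR range

def hard_clip(nits, peak=100.0):
    # What a straight SDR recording does: everything above peak turns white.
    return [min(v, peak) for v in nits]

def masked_grade(nits, src_peak=1000.0, dst_peak=100.0):
    # Scale only the masked (sky) region so the relative detail survives.
    return [v * dst_peak / src_peak for v in nits]

print(hard_clip(sky))      # [100.0, 100.0, 100.0] -> uniform white, detail gone
print(masked_grade(sky))   # [80.0, 95.0, 100.0]   -> gradients preserved
print(hard_clip(actor))    # [40.0, 55.0, 70.0]    -> untouched
```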
For instance, this is how a sky that peaks at 1000 nits looks at 100 nits SDR BT.709: Link

Do you see something weird? Of course you do! The tonality and the brightness of the sky are completely different from what you would see with your naked eye. It would be very, very bright and almost unwatchable with your naked eye, but here it's definitely watchable and almost relaxing to look at, 'cause it has been brought down to 100 nits in order to be displayed on SDR monitors. This is the compromise you have to make, and it's something that's always done in movies.

For instance, this is what you would get if the very same scene is shot in BT.709 SDR 100 nits directly (left) or in Log at 1000 (or more) nits and then graded down to 100 nits (right):




I'm not gonna say anything about the best practices of grading and so on, just make sure everything is consistent and that you're respecting the limited TV range at all times: even if you're working with 12bit precision, output legal range (256-3760) and check it with a proper waveform monitor. In HDR, also keep in mind how many nits you're going to grade at and how that relates to the stops of your camera... and please use HDR the proper way, which is NOT to make over-saturated mind-blowing super huge mega colors and contrast, but to represent scenes with a high range so that they're as close as possible to the natural light viewed by the human eye.
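The legal-range numbers scale mechanically with bit depth, which is easy to double-check:

```python
# Limited ("legal") TV range scales with bit depth: 8-bit luma 16..235
# becomes (16 << (n-8))..(235 << (n-8)) at n bits.
def legal_range(bits):
    shift = bits - 8
    return (16 << shift, 235 << shift)

for bits in (8, 10, 12):
    print(bits, legal_range(bits))
# 8  (16, 235)
# 10 (64, 940)
# 12 (256, 3760)
```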


5) Post-processing with Avisynth and final encode
There are many different frameservers, but Avisynth is the best one, and it has so many different plugins that it can and must be used on professional productions. For instance, I used it to restore, upscale and frame-rate convert each and every piece of archive footage for documentaries. Another time I used Avisynth to apply a very light denoise to the final version exported by DaVinci, which was DNxHR 4:2:2 12bit. The reason I applied denoise is that the final file delivered to users, like on the BD, has a way lower bitrate than the exported masterfile, so the noise caught by the camera would turn into random crap averaged out by the codec. At the very beginning I mentioned Log footage, and I gotta say that although Log is great, whenever you shoot Log you get a lot of noise, like tons of it. The reason is that you get exactly everything, literally everything the sensor can capture, and part of that is noise which isn't supposed to survive once the footage is debayered and saved without such a hugely wide color curve. Besides, since you gotta encode different versions, Avisynth handles masterfiles very well and you can use it as an input to encode with different codecs.

5.1) Delivery to UHD BD
Avisynth can be used as an input to x265, which can be used to encode official UHD Blu-rays. Of course you gotta respect the standard and make sure that everything is correctly set, but once you do that, you're ready to go. The reason I use it is that x265 is better than other professional encoders like Ateme and provides better quality at the same bitrate.

5.2) Delivery to FULL HD BD
Avisynth can be used as an input to x264, which can be used to encode official FULL HD Blu-rays. x264 is very reliable, has way better grain retention than many professional encoders and also scores higher with metrics like SSIM and PSNR (VMAF wouldn't be of much use at such a high bitrate), so it's my de facto choice.

5.3) Delivery to Cinema
Cinema delivery is a little bit trickier, 'cause theaters use DCP (Digital Cinema Package), which is basically Motion JPEG 2000, 4:4:4 12bit planar, in the XYZ color space. The tricky part is indeed bringing it to the XYZ color space correctly. There are many different ways of doing this; for instance, you can make your own LUT according to both mathematical calculations and your taste and use the Cube plugin inside Avisynth, or you can use HDRTools inside Avisynth. Please note that even though your final output is 12bit, it's very strongly suggested that you do the conversion with 16bit planar precision and then dither down to 12bit for output. Also note that Avisynth doesn't officially support XYZ, as it's considered an intermediate color space and not a final/delivery color space, so the stream is gonna be flagged as RGB even though it's XYZ. In other words, don't worry if you see messed-up colors in AvsPmod and so on. Just make sure that whatever you use to decode the uncompressed Avisynth output (like ffmpeg) is aware that you're sending out XYZ and NOT RGB (even though it's flagged that way). Once you do that, you can use ffmpeg to encode each and every frame as .tiff, then OpenDCP to encode them as JPEG 2000 and mux them together in an .mxf file. Of course, OpenDCP has a series of built-in converters that can also take care of the conversion if you make the .tiff in, for instance, YUV. Honestly, it really depends on you, your eyes and your taste.
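For the conversion itself, the core math per pixel looks roughly like this (a heavily simplified Python sketch: the plain 2.4 decode gamma, the skipped white-level normalisation and rounding instead of proper dithering are all my simplifications, not the full DCI pipeline):

```python
def rec709_to_xyz12(r, g, b):
    """Toy BT.709 -> XYZ conversion for one pixel (components in 0..1).
    Illustrative only: a real DCP grade also normalises levels for the
    48 cd/m2 reference white and dithers rather than rounds."""
    # Linearise with a plain 2.4 power (rough stand-in for the BT.1886 EOTF)
    lin = [c ** 2.4 for c in (r, g, b)]
    # Standard BT.709 (D65) RGB -> XYZ matrix
    M = ((0.4124, 0.3576, 0.1805),
         (0.2126, 0.7152, 0.0722),
         (0.0193, 0.1192, 0.9505))
    xyz = [sum(m * c for m, c in zip(row, lin)) for row in M]
    # Re-encode with the DCI 2.6 gamma; clamp, then quantise to 12 bit.
    # Keep float/16-bit precision internally; go to 12 bit only at the end.
    return tuple(round(min(max(v, 0.0), 1.0) ** (1 / 2.6) * 4095) for v in xyz)

print(rec709_to_xyz12(1.0, 1.0, 1.0))  # reference white -> (4016, 4095, 4095)
```

Note how even reference white doesn't land on equal code values in XYZ, which is exactly why a stream flagged as RGB looks wrong in a preview tool.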

Last edited by FranceBB; 15th April 2021 at 23:51.
Old 11th December 2019, 02:52   #2  |  Link
FranceBB
Broadcast Encoder
 
 
Join Date: Nov 2013
Location: Royal Borough of Kensington & Chelsea, UK
Posts: 2,883
6) Test at the Cinema
Generally they choose a cinema and send you there to test the encoded file first. There are going to be a few people together with you and you're gonna see how the encoded file looks. Don't be scared if you see a lack of detail at the cinema: it's fine, it's the way it is. It doesn't matter whether your entire production was in 4K, maybe downscaled from 6K and super sharp; at the cinema it's never gonna look as sharp as on your $50,000 studio monitor... and no, it's not because of the wavelet transform used to encode the JPEG 2000, it's the effect of the projected image. On the other hand, though, movements like a pan and so on are gonna look very smooth at the cinema, way smoother than on your screen, even at 23.976fps (24p). Why? I can't tell. Honestly, I have no idea why it's smoother, like more fluid, once it's projected at the cinema than it is on your monitor, but it's just the way it is. Maybe it's because of the thousands of tiny little holes in the projection screen, or perhaps it's the material the screen is made of, or perhaps it's because the image is projected and not displayed, so the light is passive and not active and somehow the persistence of vision (the light on the eye) is different, or perhaps it's because monitors force you to a certain frequency by duplicating frames and they do it in some bad ways... I don't know. I can definitely tell you that it's smoother at the cinema and that it's not as sharp, it's kinda "soft". As for the technical part, they're gonna let you meet the projectionist and they're gonna "ingest" the DCP you encoded. Some of them prefer a particular kind of drive which is also very expensive, but most will be happy with a normal hard drive as long as they can copy the DCP to their video server; it's just gonna be slow as hell.
Oh, a side note: the sound is gonna be totally different from how you expected it to be, 'cause you're in a cinema, there are several different speakers and the room is empty.


7) Movie Premiere
The movie premiere is probably the best part for most people, but not for me. An hour or so before the movie they invite all the actors, and all the people who worked on the movie (including the technical staff) are gonna be there for a toast. I gotta say it's not something I fit well into, 'cause, you know, they are actors, they're very posh and I'm kinda not; I'm really a normal person, and I think other encoders and technical staff are pretty much like me, like not so posh, so I was a bit uncomfortable. My suggestion is to try to act natural and just not mess up. I tried to smile at everyone and be calm... but I was definitely nervous, especially 'cause I was like "what if it stops? What if it gets corrupted? What if they don't like it?" and so on, and I was thinking about all the issues I noticed during the whole movie and going "are they gonna notice this? Are they gonna notice that?".
After the toast, everybody heads to where the movie is gonna be projected, and that's it, it starts to play. You're gonna notice that the audio is different 'cause the room is packed and there are many people in there. Another thing is that people are gonna watch the movie differently, like they're going to smile or laugh at things, and you may be thinking "OMG, we should have left a bit more space there for them to laugh, I didn't expect that" and so on; and whenever you see something that you consider an issue, something "bad", most people are not gonna notice it and just won't say anything. When the movie stops playing, people generally clap, and that's gonna be the biggest relief and the biggest reward, or at least it was for me, as well as seeing my name as encoder in the credits, as I was telling StainlessS the other day here on Doom9. :P



Alright, that's pretty much it. I wanted to share my experience in case it's useful to anyone else, and also because I'm so happy right now... like I really am, and I hope I'll have other chances to work on other productions.

Last edited by FranceBB; 28th October 2023 at 00:14.
Old 11th December 2019, 10:40   #3  |  Link
StainlessS
HeartlessS Usurer
 
 
Join Date: Dec 2009
Location: Over the rainbow
Posts: 10,980
Good post FranceBB,
really very interesting, although I did not have a clue what you were talking about for a lot of the time.
Your trepidation was palpable at the Movie Premiere; the term nerve-wracking comes to mind.

That there monica you got is one helluva mouthful, "Francesco Bucciantini Livio Aloja", very Germanic

Anyways, hoping that there are many more nerve wracking episodes for you to endure, good on ya.
__________________
I sometimes post sober.
StainlessS@MediaFire ::: AND/OR ::: StainlessS@SendSpace

"Some infinities are bigger than other infinities", but how many of them are infinitely bigger ???
Old 11th December 2019, 11:21   #4  |  Link
FranceBB
Broadcast Encoder
 
 
Join Date: Nov 2013
Location: Royal Borough of Kensington & Chelsea, UK
Posts: 2,883
Quote:
Originally Posted by StainlessS View Post
[...]
That there monica you got is one helluva mouthful, "Francesco Bucciantini Livio Aloja", very Germanic
Haha, well, actually I'm just Francesco Bucciantini; the other encoder I worked with is Livio Aloja, who is also my boss (he's a different person xD).
Old 11th December 2019, 11:27   #5  |  Link
StainlessS
HeartlessS Usurer
 
 
Join Date: Dec 2009
Location: Over the rainbow
Posts: 10,980
Arh well, ain't that typical, how much effort did your boss put into the encoding?

[If you just say 'lots' then I'll understand ]
Old 11th December 2019, 19:09   #6  |  Link
SeeMoreDigital
Life's clearer in 4K UHD
 
 
Join Date: Jun 2003
Location: Notts, UK
Posts: 12,219
Quote:
Originally Posted by FranceBB View Post
5.3) Delivery to Cinema
[...] theaters use DCP (Digital Cinema Package) which is basically a Motion JPEG2000 4:4:4 12bit planar in the XYZ color space. [...]
Hi FranceBB,

Out of interest... What resolution did you make your DCP?


Cheers
__________________
| I've been testing hardware media playback devices and software A/V encoders and decoders since 2001 | My Network Layout & A/V Gear |
Old 11th December 2019, 19:41   #7  |  Link
Tadanobu
Registered User
 
Join Date: Sep 2019
Posts: 37
Fascinating post. Thanks a lot for sharing your experience.
Old 11th December 2019, 20:09   #8  |  Link
FranceBB
Broadcast Encoder
 
 
Join Date: Nov 2013
Location: Royal Borough of Kensington & Chelsea, UK
Posts: 2,883
Quote:
Originally Posted by SeeMoreDigital View Post
Hi FranceBB,

Out of interest... What resolution did you make your DCP?


Cheers
The final exported masterfile was 4016x2160p; however, the DCP standard is very picky and only allows Flat or Scope, namely:

Flat (3996 × 2160) = 1.85:1
Scope (4096 × 1716) = 2.39:1

4016x2160 is around 1.86:1, so I cropped a little bit (20 pixels) to get 1.85:1 at 3996x2160. As for the UHD and FULL HD versions, I had to go to 3840x2160 and 1920x1080, which are 1.78:1 (16:9), so I just used AddBorders(), 'cause otherwise I would have had to crop too much. The bitrate of the JPEG 2000 for the DCP was bumped up to 250 Mbit/s. When I had a word with the projectionist about the fact that it looked soft even though it was 4K, he basically said that many people don't even notice the difference between 2K and 4K at the cinema and that such a "softening" effect compared to our monitors is normal.
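The container arithmetic is easy to verify (the Flat/Scope sizes are from the DCI spec; the source size and crop are from this project):

```python
# Check the DCP container maths from the post above.
flat  = (3996, 2160)   # "Flat"  container, 1.85:1
scope = (4096, 1716)   # "Scope" container, 2.39:1
src   = (4016, 2160)   # exported masterfile

print(round(src[0] / src[1], 2))       # 1.86 -> fits neither container exactly
print(src[0] - flat[0])                # 20 -> pixels to crop for Flat
print(round(flat[0] / flat[1], 2))     # 1.85
print(round(scope[0] / scope[1], 2))   # 2.39
```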

Quote:
Arh well, aint that typical, how much effort did your boss put into the encoding.

[If you just say 'lots' then I'll understand ]
Ah no, no, he helped me with a few things; he has experience with DCP and MJPEG 2000, so he helped me write a .bat to take care of that part as well.
He's a very nice person to work with and he's also registered on Doom9 (of course), although he mainly just lurks.
The boss of my boss however... [I'm not gonna say anything].

(Out of curiosity: we have a rank-based system in which every employee has a level and can rank up; it goes from 1 to 10, where 10 is the maximum. I'm just a level 4.)

Last edited by FranceBB; 11th December 2019 at 20:18.
Old 25th March 2022, 19:30   #9  |  Link
kolak
Registered User
 
Join Date: Nov 2004
Location: Poland
Posts: 2,843
Quote:
Originally Posted by FranceBB View Post
6)...On the other hand, though, movements like a pan and so on are gonna look very smooth at the cinema, way smoother than on your screen, even if it's a 23.976fps (24p). Why? I can't tell. ...

One of the reasons is that projection is at 72Hz (or more), so each frame is displayed 3 times, which itself makes 24p "smooth".
Other than this, I think it's just down to the different display tech. Projection has no inherent lag like e.g. LCD tech has (I assume).

You could, for example, get quite smooth 24p on old Pioneer Kuro plasmas (very nice TVs btw), as they also used a 72Hz refresh rate with 24p sources. I assume this could be done on today's good TVs with 120Hz panels as well, but I'm not sure any manufacturer has done it well. What I've seen was not so good.

Last edited by kolak; 25th March 2022 at 20:03.
Old 25th March 2022, 19:52   #10  |  Link
kolak
Registered User
 
Join Date: Nov 2004
Location: Poland
Posts: 2,843
Quote:
Originally Posted by FranceBB View Post

5.3) Delivery to Cinema
[...] theaters use DCP (Digital Cinema Package) which is basically a Motion JPEG2000 4:4:4 12bit planar in the XYZ color space. [...]
For this you could just use Resolve's own NR (which is not that bad) or the NeatVideo plugin. Then you export the DCP package directly with Resolve's own exporter (its Kakadu-based JPEG 2000 encoder is neither bad nor slow) or use the easyDCP plugin, which is used by maaaany.
I see zero point in any "external" workflow, as in this case you are not going to gain much (if anything; you may even run into problems instead of gains). Resolve works in 32bit float on the GPU, so you don't have to worry about precision.
Someone could also edit in Resolve on the native camera files (with a good machine it's not a problem even with 8K RED RAW sources) and skip the whole multi-tool workflow. It can save time, money, storage, etc.
I would say you had a bit of a "convoluted" approach, but if you felt more confident with it, then why not.

Resolve may be relatively easy, but it's not the best grading tool (especially for complex multi-camera projects).

Last edited by kolak; 25th March 2022 at 20:04.
Old 26th March 2022, 01:01   #11  |  Link
StainlessS
HeartlessS Usurer
 
 
Join Date: Dec 2009
Location: Over the rainbow
Posts: 10,980
I'm guessin' that kolak could give a much more precise and interesting account,
go on K, givus all U got !

Last edited by StainlessS; 26th March 2022 at 01:04.
Old 26th March 2022, 19:06   #12  |  Link
kolak
Registered User
 
Join Date: Nov 2004
Location: Poland
Posts: 2,843
What more to say?
Avs/vs can be very handy sometimes, but in other cases there is simply no point in using it. It's more useful further down the chain, when masters are already exported and are typically already YUV-based.
High-end finishing is definitely one such case: AVS doesn't really offer much there. Resolve, thanks to its price, became an easy choice when you need to do a high-end project (or a project which starts with good camera assets and needs color management).
With Resolve you don't need to worry about precision (everything works in 32bit float unless you do direct exports), and once you know the basics you can also work more easily with color management. Things like XYZ exports just work (still better if you know what you are doing). You get a few exporters which typically cost quite a lot elsewhere (e.g. IMF, DCP). It's not a perfect tool, there may be some bugs, but it can be useful. The BM forum is very active and many important bugs have already been fixed thanks to user reports (it's almost like Doom9). There are even more complex and polished tools for high-end work, but those (still) cost big $K (e.g. Flame, ColorFront tools, Mistika, etc.).

Tools are not that relevant today; more important is your experience and knowledge. In the old days you had to use $1M systems to do high-end finishing, today it's very different (there may be only some exceptions, e.g. DolbyVision or reference HDR monitoring).

I started at Doom9 (as a pure hobby), then moved to DVD/BD master preparation (avs was useful here), then high-end finishing, and now complex systems for high-volume streaming delivery (avs may help, but typically ffmpeg is enough). I've worked in small and huge companies in the UK (mainly London). I know both worlds (open source and pro, plus a bit of Python, which is so important today) very well, so I can pick optimal tools/workflows, which today is a big part of your potential success.

Last edited by kolak; 27th March 2022 at 17:45.
Old 10th April 2022, 12:20   #13  |  Link
Jamaika
Registered User
 
Join Date: Jul 2015
Posts: 697
Quote:
Originally Posted by FranceBB View Post
3) Editing
My preferred software for editing is Avid Media Composer, which is also used by many famous companies like HBO, Marvel, Lucasfilm and so on. It's been developed for years and it has a lot of nice tools to edit a movie. Anyway, it's really important that you DON'T edit the masterfile directly; this is because it would be extremely hard as it's probably going to be a huge file. If your camera supports a lossless mode like .dng, use it. I often call .dng files "the elephant". Although you might laugh at it, the reason why I call it like that is that it's huge and impossible to handle. Most lossless recording modes save the file as a sequence of images, like literally every frame is stored individually, which makes them huge and impossible to handle. If you are working with work-spaces (i.e shared network raid storages), you almost definitely have not enough bandwidth to do anything with such a huge file. This is where the encoder comes in and makes a lossy mezzanine file which is meant to be used for editing. You can use whatever you want as a mezzanine file, just make sure the bitrate doesn't skyrockets so I suggest you to have a constant bitrate one. The quality of the mezzanine file doesn't matter, it doesn't have to be perfect because you're gonna re-link to the original lossless file once you have done your editing, so it's literally just a temporary one. Once you finished your edit, enable the dynamic relink option in AVID, re-link to the original lossless file and export the .aaf file with rendered effects if they need to be rendered. An .aaf file is simply going to be a metadata file which will allow the sequence you edited in the timeline to be used by another editing programme like Davinci Resolve. Of course, your secondary editing programme won't have all the effects of the one you used, so some of them will have to be rendered. 
If you are going to render effects and you can't afford uncompressed lossless, make sure you choose something with a resonably high quality like DNXHR 4:4:4 12bit. I like it 'cause it has a very high bitrate, it's 4:4:4 and 12bit should be enough to avoid any sort of banding. If you're going to use different mezzanine files in your workflow, please please please make sure that they're lossless but if you have to go lossy, it's mandatory that you use the same codecs or very similar codecs (i.e codecs that use the very same transform and have similar parameters). This is because different codecs with different transforms cut off different kind of frequencies, so what you end up with has the worse of one and the other.
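The "elephant" point above can be put in numbers with a back-of-the-envelope sketch. Every figure here is an illustrative assumption (a 4K DCI Bayer sensor, 12 bit, 24 fps, and a 440 Mbit/s constant-bitrate mezzanine), not a measured value from any particular camera or codec:

```python
# Back-of-the-envelope maths for "the elephant": sustained bitrate of a
# lossless per-frame .dng sequence versus a constant-bitrate mezzanine.
# All figures below are illustrative assumptions, not measured values.

def frame_size_bytes(width, height, bits_per_sample, samples_per_pixel=1):
    # Bayer raw stores one sample per photosite, hence the default of 1.
    return width * height * samples_per_pixel * bits_per_sample // 8

dng_frame = frame_size_bytes(4096, 2160, 12)   # bytes per uncompressed frame
dng_mbit_s = dng_frame * 8 * 24 / 1e6          # sustained bitrate at 24 fps
mezz_mbit_s = 440                              # assumed CBR mezzanine bitrate

print(f"DNG sequence: {dng_mbit_s:.0f} Mbit/s "
      f"({dng_frame * 24 * 60 / 1e9:.1f} GB per minute)")
print(f"Roughly {dng_mbit_s / mezz_mbit_s:.0f}x the mezzanine bitrate")
```

At these assumed numbers the raw sequence needs several times the bandwidth of the mezzanine, which is exactly why a shared network work-space chokes on it while the proxy edits fine.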
I wonder what else the 21st century has in store. What is the DNG format, anyway? Adobe has barely developed it for 10 years, and it isn't up to 10bit HDR.
Canon cameras: I don't know how it is today, but in my third-world EU country the slogan on the Silk Road was: "go to a hacker so that you can take lossless RAW photos". Why are there no compressed images? Where is a format like JPEG XL?
Second thing: I bought a Sony HEVC TV this year. I asked how to make the most of Canal+'s 4K HEVC technology in Poland. Oh dear, there is no such possibility at present: the decoder doesn't provide for it, and you've signed a new two-year contract, so that's your problem. And where is the CAM module for a TV that cost a few hundred euro? There is no adapter for the card. Also, you can't record TV movies.
Quote:
Originally Posted by FranceBB View Post
Do you see something weird? Of course you do! The tonality and the brightness of the sky are completely different from what you would see with your naked eye. It would be very, very bright and almost unwatchable with your naked eye, but here it is definitely watchable and almost relaxing to watch, 'cause it has been brought down to 100 nits in order to be displayed on SDR monitors. This is the compromise you have to make, and it's something that's always done in movies.
For instance, this is what you would get if the very same scene is shot in BT709 SDR 100 nits directly (left) or in Log at 1000 (or more) nits and then graded to 100 nits:
For amateurs and unemployed people who buy Sony camcorders for shooting weddings: if you buy a mid-range camera, on a cloudy day the clouds will be purple, even in HDR. The LED lighting in the church will be purple too. Who then watches it, and at what price?

Last edited by Jamaika; 10th April 2022 at 12:34.
Jamaika is offline   Reply With Quote
Old 11th April 2022, 10:48   #14  |  Link
FranceBB
Broadcast Encoder
 
FranceBB's Avatar
 
Join Date: Nov 2013
Location: Royal Borough of Kensington & Chelsea, UK
Posts: 2,883
Quote:
Originally Posted by Jamaika View Post
What is DNG format?
A lossless 12bit format in which each frame is stored as a single .dng picture, while audio is stored separately as mono PCM 24bit little endian .wav files, and then you have an .xml that links 'em all.
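That per-frame layout is easy to sketch: a take is a folder of numbered .dng frames plus the mono .wav files and the linking .xml. The filenames below are invented for illustration, not any real camera's naming scheme:

```python
# Hypothetical on-disk layout of a RAW take as described above: one .dng per
# frame, mono .wav audio channels, and an .xml tying them together.
# The clip naming convention here is invented for illustration.
import re

def order_frames(filenames):
    """Return the .dng frames sorted by their numeric frame index."""
    frames = [f for f in filenames if f.lower().endswith(".dng")]
    return sorted(frames,
                  key=lambda f: int(re.search(r"(\d+)\.dng$", f, re.I).group(1)))

take = ["CLIP_000002.DNG", "CLIP_000000.DNG", "CLIP_000001.DNG",
        "CLIP_A01.WAV", "CLIP_A02.WAV", "CLIP.XML"]
print(order_frames(take))  # frames back in shooting order
```

Sorting numerically (not lexically) matters once a take runs past the zero-padding of the frame counter.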

Quote:
Originally Posted by Jamaika View Post
This format has not been developed by Adobe for 10 years.
Not true, version 1.5 came out in May 2019.

Quote:
Originally Posted by Jamaika View Post
It is not up to 10bit HDR.
It's lossless 12bit Clog3 HDR, what else do you want?
Clog3 collects as much light as the Canon camera sensor can collect and saves it losslessly. In post-production, Clog3 will become either HLG or PQ, and it can also become BT709 SDR. Given that Clog3 is a totally logarithmic curve just like PQ, it concedes nothing to that curve.
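As a concrete example of the PQ side of that comparison, here is the SMPTE ST 2084 inverse EOTF, which maps absolute luminance in nits to a 0..1 code value. (Canon publishes the exact Clog3 formula separately; this sketch only illustrates the PQ curve, using the constants from the standard.)

```python
# SMPTE ST 2084 (PQ) inverse EOTF: absolute luminance -> code value.
# Constants as defined in the standard.
m1 = 2610 / 16384        # 0.1593017578125
m2 = 2523 / 4096 * 128   # 78.84375
c1 = 3424 / 4096         # 0.8359375
c2 = 2413 / 4096 * 32    # 18.8515625
c3 = 2392 / 4096 * 32    # 18.6875

def pq_oetf(nits):
    """Luminance (0..10000 nits) -> PQ code value (0..1)."""
    y = max(nits, 0.0) / 10000.0
    return ((c1 + c2 * y**m1) / (1 + c3 * y**m1)) ** m2

# 100 nits (the SDR grading target discussed earlier) lands at roughly code
# value 0.508, i.e. about halfway up the curve: most of the code range is
# reserved for highlights, which is the log-like behaviour being compared.
print(round(pq_oetf(100), 3))
print(pq_oetf(10000))  # peak luminance maps to exactly 1.0
```

The fact that diffuse-white SDR levels sit near the middle of the code range is precisely what makes a log-style capture gradeable down to 100 nits without crushing or clipping.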




Quote:
Originally Posted by Jamaika View Post
Canon cameras. I don't know about it today, but in the EU's third world country, the slogan on the silk road was: "Go tohacker so that you can take lossless RAW photos. Why are there no compressed images? Where is the format eg JPEGXL.
JPEGXL supports both lossy and lossless, true, but how many cameras support it? A fraction at the very best.


Quote:
Originally Posted by Jamaika View Post
Second thing. I bought Sony HEVC TV this year. I am asking how to maximize the use of HEVC 4K Canal+ technology in Poland? Oh dear, there is no such possibility at present. The decoder doesn't provide for this and you have signed new contract for two years. This is your problem. Where does the CAM Module have TV for few hundred EURO? There is no adapter for the card. Also, you can't record TV movies.
I feel sorry for you. In countries in which Sky / Comcast is available, we have UHD H.265 BT2020 HDR HLG 50p linear channels at 25 Mbit/s, and we also have UHD H.265 content for VOD. You can watch anything there, from sports to TV series and movies. And while in Italy we're stuck with AC3 5.1, I think in the UK they also have E-AC3 with Atmos.


Quote:
Originally Posted by Jamaika View Post
For amateurs and unemployed people who buy Sony camcorders for shooting weddings. If you buy a mid-range camera, on a cloudy day the clouds will be purple, even in HDR .. The LED lighting in the church will also be purple. Who then watches it and at what price?
I don't wanna offend anyone, but I'm not really interested in the consumer market (I mean, I am, but not in this topic XD). Keep in mind that I've seen footage from Arri Alexa cameras and it wasn't purple, and the same goes for Red Monstro cameras, but again, I know that a consumer won't be able to afford a 60k+ camera. Even something like the old Canon C300 Mark II mentioned above might be too much for a consumer at 5k, and I understand that...
But again, camera prices will go down in the future even for the consumer market; it's just a matter of time, I think...

I think the Blackmagic Pocket Cinema Camera 6K would be a good compromise for a consumer who is interested in shooting good videos, but keep in mind that this is only 'cause we're on Doom9 and people who do this for a job want the same quality for their family recordings. I know one of my colleagues, JR, who is struggling the same way you are: he wants to have the same quality. The overwhelming majority of people watch crap and shoot crap with their smartphones; it's just us who find that unacceptable, 'cause our eyes are trained to see all those not-so-little issues.

Last edited by FranceBB; 11th April 2022 at 10:51.
FranceBB is offline   Reply With Quote
Old 11th April 2022, 11:28   #15  |  Link
anton_foy
Registered User
 
Join Date: Dec 2005
Location: Sweden
Posts: 702
Quote:
Originally Posted by Jamaika View Post
For amateurs and unemployed people who buy Sony camcorders for shooting weddings. If you buy a mid-range camera, on a cloudy day the clouds will be purple, even in HDR .. The LED lighting in the church will also be purple. Who then watches it and at what price?
In 2014 I bought a Sony A6300 for 1200 euro in Sweden. After I saw how awful purple/magenta the blue hues were (in both S-Log2 and S-Log3, although since it records in 8-bit only S-Log2 is really usable), I just made my own preset inside the camera: you get CMYRGB to adjust and plenty of colour spaces. After adjustment it's the best buy I ever made; I have no problems any more, even at high ISO in low light, and AviSynth is a great tool to denoise if needed. Today I see this camera sold here in my country for 400 euro. It downsamples 6K internally to UHD and has great detail, without piercing halos or oversharpening if sharpening is turned off.
anton_foy is offline   Reply With Quote
Old 11th April 2022, 15:24   #16  |  Link
wswartzendruber
hlg-tools Maintainer
 
wswartzendruber's Avatar
 
Join Date: Feb 2008
Posts: 412
The JPEG people haven't even finished the various JPEG XL pieces yet. The spec is there, but the reference software and conformance testing vectors are still in play.
wswartzendruber is offline   Reply With Quote
Old 11th April 2022, 18:15   #17  |  Link
Jamaika
Registered User
 
Join Date: Jul 2015
Posts: 697
Quote:
Originally Posted by FranceBB View Post
A lossless 12bit format in which each frame is stored as a single .dng picture [...] version 1.5 came out in May 2019 [...] It's lossless 12bit Clog3 HDR, what else do you want? [...] JPEGXL supports both lossy and lossless, true, but how many cameras support it? A fraction at the very best. [...] In countries in which Sky / Comcast is available, we have UHD H.265 BT2020 HDR HLG 50p linear channels at 25 Mbit/s [...] I think the Blackmagic Cinema Pocket 6K would be a good compromise for a consumer [...]
Colormatrix options in the old, free Adobe DNG SDK 1.5 software:
"-cs1 Color space: "sRGB" (default)\n"
"-cs2 Color space: "Adobe RGB"\n"
"-cs3 Color space: "ProPhoto RGB"\n"
"-cs4 Color space: "ColorMatch RGB"\n"
"-cs5 Color space: "Gray Gamma 1.8"\n"
"-cs6 Color space: "Gray Gamma 2.2"\n"
Where is the default "unknown" raw colormatrix option, as in LibRaw? For archiving raw.
Where are the BT.2020 and S-Gamut3/S-Log3 options?
What kind of lossy/lossless codec is still based on libtiff in 2022? Most of its codecs are obsolete, and libtiff tries to change that at the lowest possible cost.
Why does DNG only offer lossy compression with the old JPEG BT.601 codec?
Code:
#ifdef LZW_SUPPORT
" -c lzw[:opts]   compress output with Lempel-Ziv & Welch encoding\n"
/* "    LZW options:" */
"    #            set predictor value\n"
"    For example, -c lzw:2 for LZW-encoded data with horizontal differencing\n"
#endif
#ifdef ZIP_SUPPORT
" -c zip[:opts]   compress output with deflate encoding\n"
/* "    Deflate (ZIP) options:", */
"    #            set predictor value\n"
"    p#           set compression level (preset)\n"
"    For example, -c zip:3:p9 for maximum compression level and floating\n"
"                 point predictor.\n"
#endif
#if defined(ZIP_SUPPORT) && defined(LIBDEFLATE_SUPPORT)
"    s#           set subcodec: 0=zlib, 1=libdeflate (default 1)\n"
/* "                 (only for Deflate/ZIP)", */
#endif
#ifdef LERC_SUPPORT
" -c lerc[:opts]  compress output with LERC encoding\n"
/* "    LERC options:", */
"    #            set max_z_error value\n"
"    p#           set compression level (preset)\n"
  #ifdef ZSTD_SUPPORT
    "    s#           set subcodec: 0=none, 1=deflate, 2=zstd (default 0)\n"
    "    For example, -c lerc:0.5:s2:p22 for max_z_error 0.5,\n"
    "    zstd additional compression with maximum compression level.\n"
  #else
    "    s#           set subcodec: 0=none, 1=deflate (default 0)\n"
    "    For example, -c lerc:0.5:s1:p12 for max_z_error 0.5,\n"
    "    deflate additional compression with maximum compression level.\n"
  #endif
#endif
#ifdef LZMA_SUPPORT
" -c lzma[:opts]  compress output with LZMA2 encoding\n"
/* "    LZMA options:", */
"    #            set predictor value\n"
"    p#           set compression level (preset)\n"
#endif
#ifdef ZSTD_SUPPORT
" -c zstd[:opts]  compress output with ZSTD encoding\n"
/* "    ZSTD options:", */
"    #            set predictor value\n"
"    p#           set compression level (preset)\n"
#endif
#ifdef WEBP_SUPPORT
" -c webp[:opts]  compress output with WEBP encoding\n"
/* "    WEBP options:", */
"    #            set predictor value\n"
"    p#           set compression level (preset)\n"
#endif
#ifdef JPEG_SUPPORT
" -c jpeg[:opts]  compress output with JPEG encoding\n"
/* "    JPEG options:", */
"    #            set compression quality level (0-100, default 75)\n"
"    r            output color image as RGB rather than YCbCr\n"
"    For example, -c jpeg:r:50 for JPEG-encoded RGB with 50% comp. quality\n"
#endif
#ifdef JBIG_SUPPORT
" -c jbig         compress output with ISO JBIG encoding\n"
#endif
#ifdef PACKBITS_SUPPORT
" -c packbits     compress output with packbits encoding\n"
#endif
#ifdef CCITT_SUPPORT
" -c g3[:opts]    compress output with CCITT Group 3 encoding\n"
/* "    CCITT Group 3 options:", */
"    1d           use default CCITT Group 3 1D-encoding\n"
"    2d           use optional CCITT Group 3 2D-encoding\n"
"    fill         byte-align EOL codes\n"
"    For example, -c g3:2d:fill for G3-2D-encoded data with byte-aligned EOLs\n"
" -c g4           compress output with CCITT Group 4 encoding\n"
#endif
#ifdef LOGLUV_SUPPORT
" -c sgilog       compress output with SGILOG encoding\n"
#endif
#define COMPRESSION_JXL 50002 /* JPEGXL: WARNING not registered in Adobe-maintained registry */

As for the DNG SDK patches, I asked the LibRaw creator for his opinion on the new DNG 1.5 version.

The next version is really just two or three new fixes added; it is recommended to use version 1.3.
His answer: "He would have liked to throw it in the trash, if not for Adobe. Lots of bugs. The fact that you managed to compile it for GCC is a miracle. You will see how badly the colours are processed. This software isn't even C++11."
I don't know what version of DNG is in the latest Photoshop 2022.

I believe the use of the JPEG XL and JPEG XS codecs is a matter of patents.

I don't know what E-AC3 with Atmos is. I think asking such a question at Orange, Canal+ or HBO would be pointless. I bought the newest TV set and we are already talking about AC-4 and VVC/EVC. There is nothing like Android with adverts pushing new codecs. I suppose it is already on the market in Japan. The point of developing the HEIF codec has essentially ceased to exist.

I have a strange feeling that this forum is not inhabited by the world of Hollywood cinema. I read about camera X for little money, and it turned out that editing the film and getting the end result is not that easy.

Personally, I can understand that Tuscany has a beautiful sky and northern people can dream about it.

http://strzelczyk.edu.pl/zawartosc/g...871/8540_w.jpg

Last edited by Jamaika; 11th April 2022 at 21:06.
Jamaika is offline   Reply With Quote
Old 11th April 2022, 20:13   #18  |  Link
wswartzendruber
hlg-tools Maintainer
 
wswartzendruber's Avatar
 
Join Date: Feb 2008
Posts: 412
I'm surprised to already see JPEG XL out in the wild.
wswartzendruber is offline   Reply With Quote
Old 20th June 2022, 23:34   #19  |  Link
Balling
Registered User
 
Join Date: Feb 2020
Posts: 539
"so the stream is gonna be flagged as RGB even though it's XYZ."

XYZ and RGB both use Identity matrix. What?
Balling is offline   Reply With Quote