We never got any 'interesting' stuff. I suspect people would prefer a bit more anonymity than you would get from a 2-3 person shop where the person who printed your stuff might also be the one ringing you up for it.
As you suggest, storage in linear 16-bit float is standard, the procedure for calibrating cameras to produce the SMPTE-specified colourspace is standard, the output transforms for various display types are standards, files have metadata to avoid double-transforming, etc. It is complex, but it gives you a lot more confidence than idly wondering how the RGB triplets in a given JPG relate to the light that actually entered the camera in the first place...
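For concreteness, here's a minimal numpy sketch of the linear-versus-display-encoded distinction that pipeline formalizes: intermediates stay in linear float, and the sRGB transfer function is applied only at the display edge. The round-trip tolerance is an assumption driven by float16 precision.

```python
import numpy as np

def srgb_encode(linear: np.ndarray) -> np.ndarray:
    """Apply the standard sRGB OETF to linear-light values in [0, 1]."""
    return np.where(linear <= 0.0031308,
                    12.92 * linear,
                    1.055 * np.power(linear, 1 / 2.4) - 0.055)

def srgb_decode(encoded: np.ndarray) -> np.ndarray:
    """Invert the sRGB OETF back to linear light."""
    return np.where(encoded <= 0.04045,
                    encoded / 12.92,
                    np.power((encoded + 0.055) / 1.055, 2.4))

# Store intermediates as linear 16-bit float; encode only for display.
linear = np.random.default_rng(0).random((4, 4, 3)).astype(np.float16)
display = srgb_encode(linear.astype(np.float32))
round_trip = srgb_decode(display)
assert np.allclose(round_trip, linear.astype(np.float32), atol=1e-3)
```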
As far as the camera is concerned, it's a big advantage to have an electronic shutter. The effects of camera shake are magnified with macro photography, and a mechanical shutter can make the results observably softer. I am cheap, so I use an old DSLR in T mode and use a Raspberry Pi to turn on one of those backlit sketch pads for a fraction of a second to expose the image.
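As a sketch of that trigger rig (gpiozero on the Pi; the pin number, the transistor switching the pad's supply, and the flash duration are all assumptions):

```python
# Hypothetical wiring: the sketch pad's LED supply is switched through a
# transistor on GPIO pin 17. The camera sits open in T (time) mode, so the
# flash length, not the shutter, sets the exposure.
from time import sleep
from gpiozero import LED

backlight = LED(17)

def expose(duration_s: float = 0.05) -> None:
    """Flash the backlit pad for a fraction of a second."""
    backlight.on()
    sleep(duration_s)
    backlight.off()

expose(0.05)
```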
This takes me back. I worked in a one-hour photo place way back in the day, operating a Noritsu. We had a film school in town and students would often come in with their C-41 or their Tri-X and complain about the colors or saturation of their prints. Which was totally fair, because tapping the right CMYK buttons on the machine was more art than science. Ah, memories.
> They sound a bit awkward to use from what I've read, as I think you need to use liquid to adhere the film to the drum correctly?

Not strictly necessary, but strongly recommended. You can also use wet mounting for your flatbed scanner. There are conversion kits so you can use wet mounting with an Epson or Canon flatbed.

Wet mounting solves or reduces a lot of problems: Newton rings, keeping the negative flat and in focus, dust, scratches, water marks.
A setup like that helped me get through 15k prints in no time with excellent results. The biggest barrier to success was that, after churning through the 7x5 and 6x4 shots, things got a lot harder with variable print sizes. It really slowed the process down; uniform print sizes, by contrast, made the first 90% of the job almost enjoyable. I averaged one "scan" every 2s.
After switching back and forth and really looking closely at each one, I ended up deciding that I liked the bottom right photo, even though I could recognize that the top right one had a more classic film look. For me it was just because there was more detail in the colors. The original scan was kind of washed out in the blues, I guess, as well as being a little more red in the dirt area.
By the time I next touched a photofinishing machine, in the early 2000s, you looked at a screen to make adjustments, and we offered digital services like scanning and printing from digital files. I still used my negative-reading skills when talking customers through troubleshooting. Putting the negative on the light table to show them how thin it was, or how wildly the color changed when you switched the kind of light the picture was shot in, was the quickest way to resolve quality complaints.
• You can modify a sensor for IR, though this is often a costly and difficult modification. But even if you do so, the IR focal distance is different from the visible-light focal distance. So for every shot you need to refocus for IR, but also ensure that the refocused IR image is exactly the same size as the visible image.

• You can use another sensor that is sensitive to IR, but it's probably not going to have the same resolution, you're going to struggle to somehow have both cameras see the target image, and then once you get both exposures, alignment becomes a problem (a sketch of that step follows below).

So yeah, doable but non-trivial.
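For the alignment step in the second option, one possibility is OpenCV's ECC image registration; the file names and the choice of an affine motion model here are assumptions.

```python
import cv2
import numpy as np

# Placeholder file names for the two captures of the same frame.
visible = cv2.imread("visible.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)
infrared = cv2.imread("infrared.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)

warp = np.eye(2, 3, dtype=np.float32)  # initial guess: identity affine
criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 500, 1e-7)
_, warp = cv2.findTransformECC(visible, infrared, warp,
                               cv2.MOTION_AFFINE, criteria)

# Resample the IR frame onto the visible frame's pixel grid.
aligned_ir = cv2.warpAffine(infrared, warp,
                            (visible.shape[1], visible.shape[0]),
                            flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)
```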
I think this is a major point. I applaud the effort of the post and would (as a Mamiya 7 shooter!) love a unit better than the Epson V600, but correcting a color cast in the film scan is trivially easy in any photo editing tool these days. I scan, get TIFFs, and can tweak to whatever. More important are the iris/optics of the scanner itself and how flat the film sits inside the bed.
C41 is such a toilet process anyway — everything is shades of brown?! — that I can't imagine anyone would look for precise color work from it, the same way I can't imagine anyone would look for resolution from 135 stock.
Don’t overthink it. Light knows only of wavelengths; our brains are where colors exist. Everything here is subjective, trying to approach what human eyes would perceive from the original subject, or not. Photography is an art, and only sometimes is the goal to accurately represent what’s in front of the camera; very often it’s the opposite.

When scanning originals, recording them in the most accurate way possible is desirable, and for that I’d suggest using multiple narrow-bandwidth emitters (as many as needed to capture the response curves of the pigments) and sensors tuned to those wavelengths. From there you should be able to reconstruct what a human eye would have seen through the lens. But again, what we see is nothing but what our brains make of the light hitting our retinas. There will never be something that’s perfectly accurate.
https://www.silverfast.com/products-overview-products-compan...

BUT... here's the rub: if your film is old, it has probably faded. So whatever you scan is going to be "wrong" compared to what it looked like the day it was taken. The only way to easily fix that is to try to find the white point and black point in the scan and recalibrate all your channels that way. Then you're really just down to eyeballing it, IMO.
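A minimal sketch of that white/black point fix, assuming a float RGB scan in [0, 1]; the percentile cutoffs are made-up starting points to tune per image, not anyone's published values.

```python
import numpy as np

def recalibrate_channels(img: np.ndarray,
                         lo_pct: float = 0.5,
                         hi_pct: float = 99.5) -> np.ndarray:
    """Per-channel levels stretch between estimated black and white points."""
    out = np.empty_like(img)
    for c in range(img.shape[2]):
        black = np.percentile(img[..., c], lo_pct)
        white = np.percentile(img[..., c], hi_pct)
        span = max(white - black, 1e-6)          # guard against flat channels
        out[..., c] = np.clip((img[..., c] - black) / span, 0.0, 1.0)
    return out
```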
You can get there if you have an accurate color profile for your camera and an accurate color profile for your monitor.

> So for this article, I don't see mathematical proof that the negatives have been inverted accurately, regardless of method, even though I'm sure the results are great. I suspect it comes down to subjective impression.

People who work with negatives generally just don’t give a shit about “accurate”. If you care about accurate colors, then maybe you would be shooting color positive film instead, or digital. It is generally accepted that part of the process of shooting negatives is making subjective decisions about color after you develop the film.

That’s not to say that you can’t get accurate colors using negatives. It’s a physical process that records color, and you can make color profiles for negatives.

> For scanning color negatives, the brand of film would be mapped to a colorspace, the light source would have its own colorspace, the two would get multiplied together somehow, and the result would be stored in linear RGB. Inversion would be linear. Then the output linear RGB would get mapped to the display's sRGB or whatever.

What you would do is store a color profile in the image.

You can use linear RGB for storing images, but it’s wasteful. Linear RGB makes poor use of the encoding range.

If you care about correct colors, you can just embed a color profile in the image. It’s easy, and it’s supported by image editors. You just have to go through the tedious process of creating a color profile in the first place, which normally requires colorimetry equipment.

There’s no reason inversion must be linear. The response curve of negative film is, well, a curve. It is not a line. When you shoot negative film and print to paper, the paper has a response curve, too.

The light source does not have a color space. It is just a single color, which is not really a “space” of colors. It has a spectrum, and that spectrum, combined with the spectral response curves of the dyes in the film layers and of your sensor, produces a result you can capture in a single color profile for the entire process: shoot a bunch of test targets with known colors (bought from the store) under controlled lighting conditions, develop, scan, and then measure the RGB values you get for those targets.
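As a toy version of profiling the entire process, one could fit a single 3x3 matrix from measured patch values to the target's published references by least squares. A real ICC profile also carries per-channel curves, so the matrix-only model is a simplifying assumption, and the numbers below are invented.

```python
import numpy as np

# Each row: linearized RGB measured from one test-target patch in the scan.
measured = np.array([[0.42, 0.31, 0.20],
                     [0.18, 0.35, 0.44],
                     [0.55, 0.52, 0.49],
                     [0.10, 0.12, 0.30]])
# Each row: the published reference value for the same patch.
reference = np.array([[0.45, 0.30, 0.18],
                      [0.15, 0.33, 0.46],
                      [0.56, 0.54, 0.50],
                      [0.08, 0.11, 0.32]])

# Solve measured @ M ≈ reference in the least-squares sense.
M, *_ = np.linalg.lstsq(measured, reference, rcond=None)

# Apply the fitted correction to new scans the same way.
corrected = measured @ M
print(np.round(corrected, 3))
```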
My feeling is most people who are going to be interested in the slight increase in color accuracy are already drum scanning, or using a virtual drum scanner like an Imacon Flextight, and the team at Imacon has some crazy color scientists working on that, as evidenced by the images it outputs.

The quest for the most true colors from C-41 feels like a pointless exercise in ways. When I print RA-4 in the darkroom I am working with a set of color correction filters and spinning dials to mix color on my enlarger head. The resulting print is my interpretation of the negative.

Back in the 1-Hour-Photo minilab days, the tech was doing more or less the same thing, or just hitting 'auto' and the Noritsu or Frontier was making adjustments to each frame before printing it.

If I am scanning the negatives with a camera and light source and, after inverting, a greenish mask is still present, as in the first conversion example they give, a few tweaks of a few sliders in photo editing software is enough to correct it.

The bigger factor at play here, in my mind, is the availability of robust and consistent color developing services. Most indie labs these days are using C41 kits and, at best, a Jobo machine. There are very few labs even offering dip-and-dunk with a proper replenishment cycle using chemistry from the big players like Fujihunt or Kodak Flexicolor.

Half a degree off temp, or a developer that's near its rated capacity, is enough to megafuck the resulting negatives.

There is an even worse trend of indie chemistry manufacturers offering C41 kits with seemingly innocent substitutions that have huge consequences. For example, one indie manufacturer in Canada is shipping their kits without a proper color developer (CD-4), instead using p-Phenylenediamine, which guarantees the incorrect formation of dyes.

Sorry if I sound negative and got on a rant; I really do love this sort of research.
https://medium.com/@alexi.maschas/color-negative-film-color-...

There's also some proper academic research into this subject going on currently: https://www.researchgate.net/publication/352553983_A_multisp...

One thing that's important to note about this process is that the idea is not to _image_ the film, but rather to measure the density of each film layer and reconstruct the color image from that information. This is a critical realization, because one of the most important things to know about color negative film is that the "color" information in the negative actually only exists relative to the RA-4 printing system. Negatives themselves don't have an inherent color space.

Cool to see someone else working on this, though. I actually considered those drivers for my build, but I ended up building a very high frequency, high resolution PWM (30kHz/10-bit) dimming solution with TI LM3409 drivers. It's very hard to get uniform light as well, so I ended up getting some custom single-chip RGB LEDs.

https://i.imgur.com/BVM9p6Q.jpeg
https://i.imgur.com/5oozHnN.jpeg

I've been working on this for a few years, and what I will say is that there's actually another level of complexity beyond just implementing the light. There's a lot of testing to ensure that you're getting proper linearization of each channel, and there's still a color crosstalk problem arising from the misalignment between the color sensitivity of most modern digital cameras and the bands that are used to scan color negatives. It requires some additional tweaking to get all of the color information into the correct channel. You can also very easily end up saturating a channel without realizing it. Oversaturated reds are a common occurrence in RGB scanning.

I'd also note that the wavelengths you should shoot for are more along the lines of 440nm, 535nm, and 660nm, which correspond to the Status M densitometry standard. This standard was designed specifically for color negative film.
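A sketch of the "measure density, don't image" idea: with narrow-band exposures near the Status M bands, per-channel transmittance converts to density as D = -log10(T). Normalizing against a capture of unexposed film base (an assumption about the workflow) also removes the orange mask's fixed offset.

```python
import numpy as np

def channel_density(raw: np.ndarray, film_base: np.ndarray) -> np.ndarray:
    """raw: linear RGB capture of the frame; film_base: capture of the
    unexposed base plus mask. Higher density means more dye in that layer."""
    transmittance = np.clip(raw / film_base, 1e-6, None)
    return -np.log10(transmittance)

# Reconstructing a positive then operates on these densities,
# not on the raw pixel values of an "image" of the negative.
```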
Perhaps the author could explain why they find one image superior, instead of just putting two images side by side with the implied message that "any idiot can see that one is better".
They sound a bit awkward to use from what I've read, as I think you need to use liquid to adhere the film to the drum correctly?
So the process would be, using the RAW scan of the image (the orange mask intact):

1) Invert the image.
2) White balance on any patch.
3) Sample the color balance of every other patch. They should have equal amounts of all colors.

(A minimal sketch of steps 1 and 2 follows below.)
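This is a literal numpy reading of steps 1 and 2, assuming a linear RAW capture of a grayscale test target; the patch coordinates are hypothetical, and treating inversion as a simple reciprocal of transmittance is an assumption, per the curve caveats elsewhere in the thread.

```python
import numpy as np

def invert_and_balance(raw: np.ndarray, patch: tuple) -> np.ndarray:
    """raw: linear float RGB scan in (0, 1]; patch: (y0, y1, x0, x1)
    of any gray patch on the target."""
    y0, y1, x0, x1 = patch
    positive = 1.0 / np.clip(raw, 1e-6, 1.0)           # 1) invert
    gray = positive[y0:y1, x0:x1].reshape(-1, 3).mean(axis=0)
    positive *= gray.mean() / gray                      # 2) white balance on the patch
    return positive / positive.max()                    # scale into [0, 1] for inspection

# 3) Then sample every other patch: after balancing,
#    R, G and B should match on each one.
```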
Maybe this is prejudiced because this is how I remember old photos looking... but then, isn't that the point of scanning old negatives anyway: to recreate what the images on them would have looked like at the time?

Arguably, though, the correct solution is to preserve the source information as much as possible, similar to what the post proposes: scan the images using light sources that correspond to the peaks of the chemicals used in the negative, and then colour grade directly from that using a modified inverted curve.

Doing it that way should permit both outputs by changing the curves used in colour grading. I suspect the real issue is just that "inverting colours" isn't the most appealing visually, just as most professional photos are colour graded to some extent because the raw images don't look as appealing.
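One way to read "modified inverted curve", sketched under the assumption of a linear scan: invert in density space, then expose per-channel gammas as the grading control. The gamma values are placeholders; the point is that both looks come from the same scan.

```python
import numpy as np

def graded_inversion(raw: np.ndarray,
                     gammas: tuple = (1.0, 0.95, 0.9)) -> np.ndarray:
    """raw: linear float RGB scan of the negative in (0, 1]."""
    density = -np.log10(np.clip(raw, 1e-6, 1.0))    # scan -> dye density
    positive = density / density.max()               # inverted, normalized
    for c, g in enumerate(gammas):
        positive[..., c] = np.power(positive[..., c], g)  # per-channel curve
    return positive
```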
I haven’t done this, but when I had images drum scanned, I provided a reference for how the colors were supposed to look and the technician matched the reference. My reference was just a flatbed scan of the same negative, which I had color corrected myself.
I get exactly that green cast and muted color range off my flatbed scans (Epson V800). This is a really intriguing path to fixing them that I hadn't considered.

It seems like the writeup here doesn't specify what you're using for the actual imaging? A flatbed scanner? A camera?
Or by this: https://www.filmlight.ltd.uk/products/northlight/overview_nl..., a non-realtime CCD slit scanner with "perfect" registration. Again, I can't remember the light source, but I suspect it's probably an arc gap like large projectors use. I do know that it has a massive cooling chamber to make sure it doesn't heat the film, though.
So for this article, I don't see mathematical proof that the negatives have been inverted accurately, regardless of method, even though I'm sure the results are great. I suspect it comes down to subjective impression.

Here's a video I found discussing monitor calibration: https://www.youtube.com/watch?v=Qxt2HUz3Sv4

If I could fix everything, I'd make all image processing something like 64-bit linear RGB and keep the colorspace internal to the storage format and display, like a black box, not relevant to the user. So, for example, no more HDR, and we'd always work with RGB in iOS instead of sRGB.

Loosely, that would look like: each step of image processing would know the colorspace, so it would alert you if you multiplied sRGB twice, taking the onus off the user and making it impossible to mess up. This would be like including the character encoding with each string. This sanity check should be included in video card drivers and game dev libraries.

If linear processing isn't accurate enough for this because our eyes are logarithmic, then something has gone terribly wrong. Perhaps 16-bit floating point 3-channel RGB should be standard. I suspect that objections to linearity get into audiophile territory, so they aren't objective.

For scanning color negatives, the brand of film would be mapped to a colorspace, the light source would have its own colorspace, the two would get multiplied together somehow, and the result would be stored in linear RGB. Inversion would be linear. Then the output linear RGB would get mapped to the display's sRGB or whatever.

My confusion is probably user error on my part, so if someone has a link for best practices around this stuff, I'd love to see it.
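Here's a sketch of that "encoding travels with the data" idea, with invented names: a buffer tagged with its colorspace that refuses the double-transform mistakes described above, analogous to the str/bytes split.

```python
import numpy as np

class TaggedImage:
    """Pixel buffer plus its encoding; space is 'linear' or 'srgb'."""

    def __init__(self, data: np.ndarray, space: str):
        self.data, self.space = data, space

    def to_linear(self) -> "TaggedImage":
        if self.space == "linear":
            raise ValueError("already linear: refusing to decode twice")
        # Crude pure-gamma EOTF, just to make the guard concrete.
        return TaggedImage(np.power(self.data, 2.2), "linear")

    def __mul__(self, other: "TaggedImage") -> "TaggedImage":
        if "srgb" in (self.space, other.space):
            raise ValueError("refusing to multiply gamma-encoded pixels")
        return TaggedImage(self.data * other.data, "linear")
```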
Maybe the bottom one is a more realistic reproduction of the scene, but I also prefer the top one, which is more saturated and closer to what I associate as a film image.Each kind of film has its own character and color variations; it’s silly to try to neutralize everything.
Both seem to work well. The bluish thing works quite well, but it turns out that different rolls need slightly different light color to compensate, so it wasn't worth the trouble. In the end the best result was buying a license for Negative Lab Pro [0] to post-process everything.

[0]: https://www.negativelabpro.com/
It turns out that you don’t care. Maybe you can think of brown as a color that filters out blue light. You can counteract it by shining more blue light through it. Maybe not exactly blue, but some light mixture. In the end it doesn’t matter, except when you look at the negative with your eyes.
While high CRI is better than low(er) CRI, one criticism is that the 'score' is somewhat lacking, in that it omits an important component:

> Ra is the average value of R1–R8; other values from R9 to R15 are not used in the calculation of Ra, including R9 "saturated red", R13 "skin color (light)", and R15 "skin color (medium)", which are all difficult colors to faithfully reproduce. R9 is a vital index in high-CRI lighting, as many applications require red lights, such as film and video lighting, medical lighting, art lighting, etc. However, in the general CRI (Ra) calculation R9 is not included.

[…]

> R9 value, TCS 09, or in other words, the red color is the key color for many lighting applications, such as film and video lighting, textile printing, image printing, skin tone, medical lighting, and so on. Besides, many other objects which are not in red color, but actually consists of different colors including red color. For instance, the skin tone is impacted by the blood under the skin, which means that the skin tone also includes red color, although it looks much like close to white or light yellow. So, if the R9 value is not good enough, the skin tone under this light will be more paleness or even greenish in your eyes or cameras.[25]

* https://en.wikipedia.org/wiki/Color_rendering_index#Special_...
In the old days, you might have been able to use fluorescent 5600K light sources, as rated ones have a known spectrum that can be counted on. Having those in a light table would get you 90% of the way to a decent scan.

One thing I did note is that the second colour image appears to have nowhere near the aliasing or film noise of the first sample. Was it scanned at different settings?
https://newhavendisplay.com/blog/brightness-enhancement-film...

Basically, it's a collimator: it takes light going in all directions (e.g. from a lamp) and turns it into light all going in one direction.

What does it look like to look through? Do objects appear brighter? I suppose they appear brighter but also smeared out?