
Color correction and 3D LUT cube

  • Color correction and 3D LUT cube

    Edit:
    WARNING: the V1 table linked below contains a mistake in DngMatrix (cmat). It will still produce a correct image, but not the proper (temperature, tint). [2020/09/05].
    Fitted Amba Color Calibration Matrices (V1): https://www.goprawn.com/forum/ambare...8654#post18654



    I think I found where most of the color calibration, or correction, in
    the camera is done. It uses a 3D LUT cube.

    Sometimes when dumping RAW files ("t img -ituner save c:\ituner.txt"
    or "t app test enc stillrawcap 0 1", if those are working) a curious
    file "ituner_CC_3d.bin" appears. Its size is 17536 bytes.
    Unpacking the FW (with Amba Extractor 4071) also yields files StillCc3
    or VideoCc2, again 17536 bytes.

    Such files contain a 3D LUT. In the 17536-byte files, at offset 0x80, there is a
    cube of 16x16x16 uint32 integers. Each integer packs one RGB triplet:
    B = bits 0-9, G = bits 10-19, R = bits 20-29. The remaining 2 bits are 0.
    Each channel is 10 bits (scale = 1023).
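    A minimal Python sketch of decoding such a file, following the layout above (the function name is mine, and the little-endian byte order is an assumption; these chips are little-endian ARM, so LE seems likely):

```python
import struct

def read_cube(path):
    """Read the 16x16x16 3D LUT from a 17536-byte file such as
    ituner_CC_3d.bin or StillCc*/VideoCc*.

    Each uint32 at offset 0x80 packs one RGB triplet:
    B = bits 0-9, G = bits 10-19, R = bits 20-29 (10 bits each, scale 1023).
    """
    with open(path, 'rb') as fh:
        fh.seek(0x80)
        raw = fh.read(16 * 16 * 16 * 4)         # 16384 bytes of cube data
    cube = []
    for (v,) in struct.iter_unpack('<I', raw):  # '<I' = little-endian uint32
        b = v & 0x3FF
        g = (v >> 10) & 0x3FF
        r = (v >> 20) & 0x3FF
        cube.append((r, g, b))
    return cube                                 # 4096 (r, g, b) triplets
```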

    These 3D LUTs indeed perform the color calibration of the raw sensor output.
    They work like a gamma LUT, except that the table is a cube and lookups
    use trilinear interpolation.
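    The trilinear lookup through a 16x16x16 cube can be sketched like this (my own illustration in NumPy, not camera code; cube values scaled to [0, 1], indexed as cube[r, g, b]):

```python
import numpy as np

def apply_3d_lut(rgb, cube):
    """Apply a 3D LUT with trilinear interpolation.
    rgb: float array (..., 3) in [0, 1]; cube: (16, 16, 16, 3) in [0, 1]."""
    n = cube.shape[0]
    x = np.clip(rgb, 0.0, 1.0) * (n - 1)
    i0 = np.minimum(np.floor(x).astype(int), n - 2)  # keep i0+1 in range
    f = x - i0                                       # fractional part
    r0, g0, b0 = i0[..., 0], i0[..., 1], i0[..., 2]
    fr, fg, fb = f[..., 0:1], f[..., 1:2], f[..., 2:3]

    def corner(dr, dg, db):
        return cube[r0 + dr, g0 + dg, b0 + db]

    # interpolate along B, then G, then R
    c00 = corner(0, 0, 0) * (1 - fb) + corner(0, 0, 1) * fb
    c01 = corner(0, 1, 0) * (1 - fb) + corner(0, 1, 1) * fb
    c10 = corner(1, 0, 0) * (1 - fb) + corner(1, 0, 1) * fb
    c11 = corner(1, 1, 0) * (1 - fb) + corner(1, 1, 1) * fb
    c0 = c00 * (1 - fg) + c01 * fg
    c1 = c10 * (1 - fg) + c11 * fg
    return c0 * (1 - fr) + c1 * fr
```

    With an identity cube (cube[i,j,k] = (i,j,k)/15) colors pass through unchanged, which is also how the identity LUT mentioned later behaves.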

    In my understanding this is how it works:

    1. raw sensor output
    2. black/white/balance
    3. de-bayer/dead pixels/CA/vignette/warp
    4. gamma 1D LUT (see below)
    5. 3D LUT
    6. "smaller scale corrections"

    I simulated steps 1-5 and the result is extremely similar to the JPGs one gets from the camera.
    In open-source tools (dcraw, ...) steps 4-5 are done with a 3x3 color matrix followed by gamma.

    To visualize a 16x16x16 3D LUT I made a 256x16 RGB image
    (= re-ordered "pixels") and enlarged it:
    [Image: ilustrate_3d_lut.png]

    Top is the input color (just the index position), while the bottom is the output
    color (the 16x16x16 RGB triplets). Every other color is interpolated.
    Here are all 6 3D LUTs I found on the FF8SE:
    [Image: ilustrate_ff8se_3d_luts.png]
    I ordered them by "strength", i.e. how much they differ from an identity
    transform. The top one is closest to identity and probably not used on the
    camera, while the bottom one is the strongest. I am not sure the FF8SE
    uses all 6; the first 3 look a bit weak.


    Zip file with all the LUTs I found: https://drive.google.com/file/d/1wf5...ew?usp=sharing
    I also made corresponding *.cube files that hopefully work.
    More info about *.cube and 3D LUTs: https://wwwimages2.adobe.com/content...cation-1.0.pdf



    I was curious how the 3D LUTs look on different cameras. A lot of them are THE SAME!
    They seem to group by manufacturer (Sony, Panasonic, ...) and probably some sub-categories ("newer", "older").

    The biggest group seems to be "newer sony sensors":
    - FF8SE = thieye_t5e = sj8pro = sj7star = gitup_f1

    Some that are unique:
    - xiaomi_yi-22l_23l
    - yi_4k
    - eken_h8pro (Sony IMX078)
    - eken_v8s (Panasonic MN34110/MN34112)
    - ezviz_s1c (Omnivision OV4685)
    - ezviz_s5plus (Panasonic MN34120?)


    About gamma: in A12_CC_Reg.bin (size = 18752), at offset 0x80 there are 512
    uint32 integers. This is the gamma 1D LUT, and it is different from the
    standard sRGB gamma. I found this exact same gamma LUT even on the Xiaomi
    Yi. It seems to be the same on all Ambas?
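    Reading that gamma LUT can be sketched the same way as the cube (again, the function name and the little-endian byte order are my assumptions):

```python
import struct

def read_gamma_lut(path):
    """Read the 512-entry gamma 1D LUT from an 18752-byte A12_CC_Reg.bin:
    512 little-endian uint32 values starting at offset 0x80."""
    with open(path, 'rb') as fh:
        fh.seek(0x80)
        raw = fh.read(512 * 4)
    return list(struct.unpack('<512I', raw))
```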


    Possible applications:
    - Change the 3D LUT for better output.
    - Better calibration of raw DNG files (and many cameras share the same settings --> so it seems to me at least).

    To be continued on how to use or misuse these 3D luts...
    Last edited by perapera; 09-05-2020, 03:31 PM.

  • #2
    Important note: if you are experimenting with DNGs, make sure the existing color calibration is removed:
    Code:
    exiftool -ColorMatrix1= -ColorMatrix2= -CalibrationIlluminant1= -CalibrationIlluminant2= FILE.DNG
    The old tool raw2dng does write some color calibration...



    • #3
      Originally posted by perapera View Post
      Possible applications:
      - Change 3D Lut for better output.
      - Better calibration of raw DNG files (and many cameras share same settings --> seem to me at least).

      To be continued on how to use or misuse these 3D luts...
      Thank you for sharing!

      Originally posted by perapera View Post
      In my understanding this is how it works:

      1. raw sensor output
      2. black/white/balance
      3. de-bayer/dead pixels/CA/vignette/warp
      4. gamma 1D LUT (see below)
      5. 3D LUT
      6. "smaller scale corrections"
      Stills processing sequence:
      [Image: amba_stills.gif]
      Donate here if you want to support my efforts and this site.

      Email me if you have any offers, requests or ideas.



      • perapera
        perapera commented
        Editing a comment
        Ahhh. That looks very, very useful! Can you share the source of that image?

        I was guessing the sequence from my knowledge of open-source things like dcraw and my small experiments with raw data from the FF8SE and Xiaomi. Some of the early steps could be done in a different order, imho.

        The way I would reconcile my understanding and your chart:

        "color_correction" == 1D GAMMA LUT and then 3D LUT CUBE. Order is important. 3D LUT has 10bits accuracy. I see no sense of it being applied before gamma.

        "tone_curve" == 256 elements tone_curve.curve_red, tone_curve.curve_green, tone_curve.curve_blue (in my ituner.txt these the are same arrays -> no shift of colors)

        "rgb_to_yuv_matrix" == is standard sRGB->YCbCr (e.g. Y = 0.299 * R + 0.587 * G + 0.114 * B, etc.)

        So yes, I was missing the tone_curve step, which still looks important enough.

        "chroma_scale" ---> hmmmm, not sure...

      • nutsey
        nutsey commented
        Editing a comment
        It's been taken from the A12 SDK.

    • #4
      Several more LUTs from different Ambarella cams: https://www.dropbox.com/sh/2ct408875...lbnZ7PNCa?dl=1


      • perapera
        perapera commented
        Editing a comment
        Thanks. I plan to extract the information from all such files so that DNGs are written fully calibrated. It is easier to tweak things from there than to guess everything.
        And yes, it probably makes more sense to name such files by sensor, like you did.
        In my ZIP I placed binaries that contain ONLY the 16x16x16 x uint32 = 16384 bytes. I was not sure whether the rest of the file is always the same across different cameras or not.

      • nutsey
        nutsey commented
        Editing a comment
        Added Insta360 ONE R LUTs.

    • #5
      The ordinary 12MP images on the FF8SE are definitely calibrated with the
      StillCc1 file. It is the 2nd strongest 3D LUT of the whole bunch.

      I wanted to be clever and find where in memory it is kept.
      So I did some memory dumping and searching. The result is only a partial success.
      For the first image, FF8SE_MOD3 keeps StillCc1 at 0xBA3F5C40
      (this is just the 16x16x16 cube; the file itself probably starts at -0x80).

      Thus I log into telnet and do(*):
      Code:
      sraraw -load 0xBA3F5C40 lut3d-identity-000.bin
      And then click the shutter or do "SendToRTOS photo" in telnet.
      The file lut3d-identity-000.bin is in my ZIP archive; it is just the identity transform, made for comparison.
      Again, these addresses are for the 16384 bytes of cube data only, not the whole StillCc1 file.
      Loading the "identity file" into memory effectively removes the color
      calibration. And indeed, the camera then makes a JPG that is dull, just
      like a RAW file without color calibration.

      Also useful is the old trick of starting an FTP server, good for putting files on and grabbing files from the camera while poking around in telnet. Execute in telnet:
      Code:
      nohup tcpsvd -u root -vE 0.0.0.0 21 ftpd -w / >> /dev/null 2>&1 &
      and then ftp to the same address (for me "telnet 192.168.42.1" and "ftp 192.168.42.1" work).

      This at least makes it possible to test these cubes a bit. Unfortunately,
      for the 2nd and later images the RTOS preloads the LUTs and other data
      somewhere else in the cache range, so 0xBA3F5C40 is not effective
      anymore. I would have to dump memory again and search...

      I do not know how the modders who prepare all those FW mods test them,
      but at least this avoids baking the whole FW and flashing it.

      (*)
      I wrote the program "sraraw" to exploit the /dev/mem trick on Linux to
      access the RTOS memory, and cross-compiled it for the ARM processor. The tool
      can do many things: save from memory, load into memory, grab
      raw data and write DNGs onto the SD card (!), but it is still in a testing
      phase...


      The point of this post is that changing 3D LUTs definitely works!
      And there are ways to test it without flashing the whole FW.



      Now, how best to edit these 3D luts?

      One idea I read that sounds doable is:
      - Use a camera JPG and, in a photo editing program, see which "simple" commands would improve it. One probably needs a lot of different scenes for testing.
      - Make a 16bit/pixel PNG or TIFF from the cube data. 16 bit is important to preserve the accuracy of the 10-bit cube data.
      - Repeat the same "simple" operations on the 16-bit PNG and save it.
      - Remake the 16x16x16 uint32 cube from the resulting PNG.
      The photo editing tool has to work in 16bit/pixel mode, which excludes Gimp. The PNGs I made (in the above ZIP) are only 8 bit and not suitable for this.
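      The 10-bit <-> 16-bit scaling in the middle steps can be made exactly lossless. A sketch of just the value mapping (my own, with the image I/O itself left to whatever 16-bit-capable tool is used):

```python
import numpy as np

def cube10_to_png16(values10):
    """Scale 10-bit cube values (0..1023) to the full 16-bit range
    (0..65535), rounding to nearest."""
    v = np.asarray(values10, dtype=np.uint32)
    return ((v * 65535 + 511) // 1023).astype(np.uint16)

def png16_to_cube10(values16):
    """Inverse mapping back to 10 bits; exact for values produced above."""
    v = np.asarray(values16).astype(np.uint32)
    return ((v * 1023 + 32767) // 65535).astype(np.uint16)
```

      The round trip is exact for all 1024 possible input values, so no cube accuracy is lost in the conversion itself; any loss would come from the editing operations.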

      There might be tools that directly read and write *.cube files. I read that the "g'mic" tool might grok *.cube or other 3D LUT formats.

      A different approach that I will try: use a ColorChecker target to calibrate in a more traditional way.



      • #6
        These posts of mine are very technical.
        But if anybody tries to work on the color output of these Ambarella boards, they might come in very useful.



        Small corrections of my previous words:

        Before taking an image the RTOS preloads the LUTs and other data into cache (0xb3180000 - 0xbcdfffff).
        So one can replace a 3D LUT with something else, then click the shutter.
        After the image is saved, the cache is again loaded with the same LUTs and data.
        But I was mistaken that it ends up at different addresses.
        It is always at the same address in the cache:
        the camera has low-ISO and high-ISO modes, and if ISO is on auto it may switch between the two regimes, making the data locations appear different.
        Low ISO uses StillCc1, high ISO uses StillCc0.
        Also: the LUTs for video are the same as for photo.




        I solved the next step in the color calibration: the TONE CURVE.

        Now my simulation is very, very close to what the camera saves as JPG. At least in color appearance.

        It is worth spelling out:

        linear sensor data --> gamma 1D LUT --> 3D LUT --> tone curve --> sRGB

        The tone curve is found in ituner.txt or in the 50780-byte files (adj_photo_default_01_Imx117).
        The curve has 3 components, but they are identical. It works like a 1D LUT.
        In the adj_photo_default_01_Imx117 file it is written as 256 uint16 values at offset 0x2C4A (scaling = 1023).
        It is close to the identity function: it lifts the shadows a bit and darkens the bright parts.
        The strange thing is: it is applied backwards!?!
        Typically with LUTs one calculates the resulting y by linearly interpolating the nearby values [y(i), y(i+1)] based on x(i) <= x < x(i+1).
        Here it is in reverse: swap x->y and y->x and then interpolate.
        I am sure about this because doing it in reverse makes the match against the camera JPGs very close, while doing it forward gives a significantly worse match.
        A figure to demonstrate this is below.
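        A sketch of both directions with np.interp (the function names are mine; the curve must be monotonically increasing, values scaled to [0, 1]):

```python
import numpy as np

def tone_curve_forward(values, curve):
    """Ordinary 1D LUT: interpolate y = curve(x) at the given values."""
    x = np.arange(len(curve)) / (len(curve) - 1)
    return np.interp(values, x, curve)

def tone_curve_reverse(values, curve):
    """The 'backwards' application described above: swap x and y,
    i.e. an inverse lookup through the curve."""
    x = np.arange(len(curve)) / (len(curve) - 1)
    return np.interp(values, curve, x)
```

        Since both directions are piecewise linear over the same nodes, the reverse application is the exact inverse of the forward one.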



        • #7
          I also found a very good way to decompose the 3D LUTs (this probably applies to
          "most newer Sony sensors": FF8SE=thieye_t5e=sj8pro=sj7star=gitup_f1).

          Ignoring the tone curve that is applied afterwards we have:

          linear_sensor_data --> Gamma1 --> 3D_LUT --> sRGB

          While the traditional "linear" sequence is:

          linear_sensor_data --> ColorMatrix [3x3] --> Gamma2 --> sRGB

          Let's try to put this into an equation (## == "a mathematical operator"
          =~ some kind of matrix operation):

          sRGB = 3D_LUT ## Gamma1 ## linear_sensor_data

          and

          sRGB = Gamma2 ## ColorMatrix ## linear_sensor_data.

          That suggests:

          3D_LUT = Gamma2 ## (ColorMatrix+epsilon) ## Gamma1_Inverse

          Or equivalent:

          ColorMatrix+epsilon = Gamma2_Inverse ## 3D_LUT ## Gamma1

          One can try to fit the 3x3 ColorMatrix and hope that the residual (epsilon)
          is small. There is still the question of what this Gamma2 should be.
          It is not the same as Gamma1; I tried, and the fit is rather poor.
          Actually the 17536-byte StillCc1 file contains 2 gamma-like curves,
          just after the 16x16x16 3D LUT.
          Bingo, that is indeed the needed function!
          The file contains 2 curves: one is the forward Gamma2, the 2nd is the inverse
          Gamma2_Inverse.
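          The fit itself can be sketched as an ordinary least-squares problem on the linearized samples (my own sketch; inputs/outputs would be the cube grid points after stripping Gamma1 and Gamma2 as per the equation above):

```python
import numpy as np

def fit_color_matrix(inputs, outputs):
    """Least-squares fit of a 3x3 matrix M such that outputs ~= inputs @ M.T.
    inputs, outputs: (N, 3) arrays of linearized RGB samples.
    The residual of this fit is the 'epsilon' discussed above."""
    M_T, *_ = np.linalg.lstsq(inputs, outputs, rcond=None)
    return M_T.T
```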

          The fit is good. Not perfect, but quite good. Here is an example of a
          slice through the 3D LUT (scaled to [0,1]):
          [Image: ff8se-stillcc1-sample-small.gif]

          The colored lines are from the 3D LUT, while the dashed black line is the
          3x3 matrix fit. I have seen far worse fits.
          Worth noting: the mismatch is larger at very dark (~0) and very bright (~1) values.

          Here is my fit for StillCc1 on FF8SE=thieye_t5e=sj8pro=sj7star=gitup_f1:
          ColorMatrix={{1.7039737,-0.65606648,-0.062960058},{-0.34198296,1.8711010,-0.47803229},{-0.14684512,-0.76191407,2.0842037}}
          So DNGs can be well color calibrated. But that is not the matrix to put into DNGs: it needs to be inverted and expressed in RGB-->XYZ space. A topic for later...

          So actually the whole 16x16x16 3D LUT is well described by just 3x3 = 9
          numbers that form a traditional color matrix, plus some small
          residual epsilon. Is this epsilon at all important, does it make any
          difference in appearance?

          To test this, and to see if one can use the above equation to produce 3D LUTs:

          3D_LUT == Gamma2 ## ColorMatrix ## Gamma1_Inverse

          I calculated the corresponding "linearized 3D LUT" and put it back into
          the camera. Plus I can simulate both and check how well that holds up
          (I stop right after the tone curve in nutsey's chart from above).
          Here is the result:

          [Image: illustrate_2020.08.10-14.25.34-lut3d.jpg]

          That is a very good match between all 4!
          The sky looks equally bad in all 4, with the same problems.
          The images do differ: for instance, the colors of the plastic toys are much stronger (more saturated) in BOTH camera JPGs.
          But in reality those toys look closer to my simulated images.
          That is perhaps the chroma enhancement later in nutsey's chart.
          I think I would prefer less of such chroma enhancement.
          Another difference is that the camera JPG with the original 3D LUT has a touch warmer colors than any other image, but you have to flip between the images in sequence to notice it.



          • perapera
            perapera commented
            Editing a comment
            Edit: a clarification, if somebody ever tries to reproduce this.
            The plot I showed is of intermediate values. That is what I feed to the fitting routine.
            It seemed best to me to strip the known information, basically using this equation:
            ColorMatrix+epsilon = Gamma2_Inverse ## 3D_LUT ## Gamma1
            The raw values from the 3D_LUT are much smoother, or gentler, than that. But that smoothness comes from Gamma1 and Gamma2. The kernel is, mostly, the 3x3 color matrix, which is either linear between (0,1) or flat where it saturates at 0 or 1.

        • #8
          I am at a point where I could use some community help!
          One of my goals is to produce (another) desktop tool for raw->dng conversion and also an ARM tool that EXECUTES on the Ambarella and dumps DNGs within the camera.
          Both work; it is the same C code compiled for desktop or for ARM. I have a working version: my FF8SE can dump properly calibrated DNGs :-----) Might be a first for cheap action cameras.

          What I am trying to figure out is the safest DNG variant that will be accepted and properly understood by various software. There are simply many ways things can be written into a DNG.

          If you can, please download the following ZIP archive with 5 DNGs:
          https://drive.google.com/file/d/1Kcg...ew?usp=sharing
          Use your favorite software to make 16-bit PNGs or TIFFs and send them to me. If that is not possible, use JPG.
          But try to keep the processing and options minimal or at defaults.
          That example is not a good image. With just two clicks ("auto WB" and "+1EV") I can get much better JPGs.
          However, I am trying to make sure the DNG is as close to the camera output as possible. Most values are taken from the camera, including the WB, which is not the best. We can improve on it later.
          Don't bother with open-source tools like dcraw/ufraw/rawtherapee/darktable.

          Another interesting thing: it seems the camera uses a very bright gamma curve compared to either the sRGB gamma or other choices. So most camera images will look darker as DNGs. I think. Will check more.




        • #9
          That old Amba SDK archive has a bunch of 3D LUTs in exactly the same 17536-byte format:
          Code:
          https://gitlab.com/hugowhiteflame/ambarella-sdk/-/tree/master/ambarella/prebuild/imgproc/img_data/arch_s2l/idsp


          Thanks nutsey for the conversion! If somebody else has different software, please contribute as well.



          • #10
            I think I found one more step in the camera's processing: the black levels. In that old Amba SDK archive
            Code:
            https://gitlab.com/hugowhiteflame/ambarella-sdk/-/blob/master/ambarella/include/arch_s2l/AmbaDSP_ImgFilter.h#L78
            there is a definition that the 4 black levels are INT16.
            In ituner.txt they are called (r_black, g_r_black, g_b_black, b_black).
            Given that they are the same or nearly the same, negative in value (around -800 in ituner.txt), and that the ROM files do not contain many negative numbers, it was easy to find where they sit.

            Code:
             offset     BLACK(r,g_r,g_b,b)    size  filename
            0x00264E  -800  -800  -800  -800  49440 FF8SE-Rom/adj_still_default_01_Imx117
            0x00264E  -800  -800  -800  -800  49440 THIEYE_T5E_V50-Rom/adj_still_default_01_Imx117
            0x00264E  -800  -800  -800  -800  49440 SJ7STAR_FW_V121-Rom/adj_still_default_01_Imx117
            0x00264E  -800  -800  -800  -800  49440 GITUP_F1_V12-Rom/adj_still_default_01_Imx317
            0x00264E  -793  -793  -793  -793  49440 EKEN_H8PRO_V15-Rom/adj_still_default_01_Imx078
            0x00264E -1023 -1023 -1023 -1023  49440 EKEN_V8S-Rom/adj_still_default_01_Mn34112
            0x00264E  -252  -252  -252  -252  49440 EZVIZ_S1C-Rom/adj_still_default_01_Ov4689
            0x0013B8  -802  -802  -802  -802 130800 XIAOMI_YI-Rom/param_adj_still_low_iso_param_release.bin
            0x591D6E  -800  -800  -800  -800 ****** EZVIZ_S5Plus.bin
            0x******  -800  -800  -800  -800 ****** YI_4K_V1109.bin
            Stars mean that I could not unpack the FW.bin, so I just searched the whole FW for the pattern (in all cases these numbers are followed by 4 zeros).
            I think that all of this fits!
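            The pattern search can be sketched like this (my own sketch; the value range used to reject false hits is a guess based on the table above, and little-endian 2-byte alignment is assumed):

```python
import struct

def find_black_levels(blob, lo=-1100, hi=-100):
    """Scan a firmware blob for 4 equal negative int16 black levels
    followed by 4 zero int16s. Returns a list of (offset, value) hits."""
    hits = []
    for off in range(0, len(blob) - 15, 2):
        vals = struct.unpack_from('<8h', blob, off)
        if (lo <= vals[0] <= hi
                and vals[0] == vals[1] == vals[2] == vals[3]
                and vals[4:] == (0, 0, 0, 0)):
            hits.append((off, vals[0]))
    return hits
```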

            It is relatively easy to check the black level on a camera: set ISO 100,
            close the lens, wrap the camera in a blanket, take a RAW and examine the
            individual R, G and B Bayer pixels. The FF8SE indeed has mean pixel values
            of 800 or 799, and the Xiaomi about 804.

            For higher ISO I think those adj files define some dependence of the black level on ISO. Anyway, this is good enough to put into a DNG.
            What is not clear to me is whether, for instance, the Mn34112 on a Novatek board will also have black=1023. I could imagine that boards may drive the sensor amplification in an adjustable way...



            • #11
              Originally posted by perapera View Post
              What is not clear to me is whether, for instance Mn34112 on Novatek board will also have black=1023. I could imagine that boards may drive sensor amplification in an adjustable way....
              Novatek cams use 8-bit values for the black level. Here we've got 10 bits. UPDATE: not 8 and 10 bits, but 10 and 12! Thanks hc1982 for the correction in the comments below.
              Last edited by nutsey; 08-22-2020, 02:59 PM.


              • #12
                Originally posted by perapera View Post
                I have a working version: my FF8SE can dump properly calibrated DNGs
                How long does it take to convert one RAW to DNG?


                • #13
                  The app can read both RTOS memory and RAW files from the disk.
                  Here are tests of the execution times when reading from RTOS memory
                  (run on the FF8SE inside telnet, but autoexec.ash works as well):
                  Code:
                  16bit lossless exe_time=0.848s real_time=2.630s file=23.4MB
                  12bit lossless exe_time=0.673s real_time=2.052s file=17.6MB
                  11bit lossy    exe_time=1.681s real_time=2.937s file=16.1MB
                  10bit lossy    exe_time=1.595s real_time=2.695s file=14.7MB
                   9bit lossy    exe_time=1.557s real_time=2.793s file=13.2MB
                   8bit lossy    exe_time=1.351s real_time=3.237s file=11.7MB
                  This exe_time is processor time, while real_time is the wall clock: the
                  difference is probably the time it takes to write to the SD card. 16 bit
                  and 12 bit are simple, minimal processor time. But compressing means
                  packing bits, and that adds time (though a smaller file gets written to
                  the SD card, so one gains a bit there).
                  9 bit is probably the limit of "decent quality".

                  Simple tests that go through those 12MP pixels searching for
                  something (histogram, estimating levels, ...) add about 0.6s - 1.2s
                  of execution time.

                  I did tests with deflate from the ZLIB library, since DNGs can have
                  deflate compression. I managed to compile with zlib, but the execution
                  times are nuts: at the lowest compression it takes close to 20s, and
                  the file is only a bit smaller.

                  EDIT: ah, important to note: my first version took >10s. But I figured out that it is faster to make 1MB buffers and write big chunks to the SD card. Also read at least 4, 8 or 16 bytes at once from memory, don't go byte by byte... The resulting code is not nice looking...
                  Last edited by perapera; 08-22-2020, 08:59 AM.



                  • nutsey
                    nutsey commented
                    Editing a comment
                    12-bit lossless looks very promising. BTW 8MP 16:9 RAWs use the 10-bit sensor mode, so they can probably be saved to 10-bit lossless DNG.

                  • perapera
                    perapera commented
                    Editing a comment
                    Good to know!

                    Plus this explains one of my "bugs" --> not all the bits were where I expected.
                    I quickly checked one such file. Indeed, it is 10 bit.
                    But there are only 676 distinct levels, so effectively it is almost 9 bit. Hmmm, any idea why? I should make a new test set in full daylight to be sure it is well filled.

                    I wonder if there are 8-bit RAWs. You mentioned Novatek and 8-bit processing?

                    This convinces me that the default has to be an "auto" mode and lose an additional 1s - 2s of processor time to check the levels, where the bits are and how many. But I will also have a "fast" mode.


                    ALSO: if anybody is interested, an example of a produced 12-bit DNG is in the above ZIP archive that nutsey processed. It was converted on the desktop, but the code is the same and it does work on the camera.

                  • hc1982
                    hc1982 commented
                    Editing a comment
                    RAWs from Novatek are usually 12 bits, and the image seems to be processed in 10 bits by the ISP. The same applies to the OB (black level) values (10 bits), while Amba uses 12 bits for its values.
                    Great research on the Amba LUTs! I have done some investigation on this topic previously too and came up with similar results, but I missed the gamma from cc_reg.bin, so I had to reconstruct that missing part from the grey level of the LUT. My results are a bit different from yours, though, especially compared to the color curve values on your chart, which shift too much from zero, while they start right from zero in my interpretation, although the shape is quite similar in general.

                  • perapera
                    perapera commented
                    Editing a comment
                    hc1982: thanks for the comment.
                    The plot I showed (in the above post) is of intermediate values, the values I feed to the fitting routine.
                    It seemed best to me to strip the known information, basically using this equation:
                    ColorMatrix+epsilon = Gamma2_Inverse ## 3D_LUT ## Gamma1
                    If you like, I can provide a plot with more "raw" curves --> my guess is that our results match better than it seems.

                • #14
                  One of the annoying questions about RAWs is providing the dimensions.
                  Here is my attempt at automatic guessing based on the file size ALONE.

                  My data set is all the possible raw2nef.ini settings I could find:
                  Code:
                  10077696, 0, 2592, 1944, 6, 5184, 16, 4, Astak_ActionPro3
                  15980544, 0, 3264, 2448, 7, 6528, 16, 4, GoPro_HERO_2014
                  16588800, 0, 3840, 2160, 7, 7680, 16, 4, Firefly_8S_8M_V72
                  17694720, 0, 4096, 2160, 7, 8192, 16, 4, Firefly_8S_8M_V69
                  22118400, 0, 3840, 2880, 7, 7680, 16, 4, Firefly_8S_12M_V69
                  22426624, 0, 3872, 2896, 7, 7744, 16, 4, Firefly_7S
                  23887872, 0, 4608, 2592, 7, 9216, 16, 4, Git2_(16:9)
                  24000000, 0, 4000, 3000, 0, 8000, 16, 4, GOPRO-H3BE
                  24000000, 0, 4000, 3000, 0, 8000, 16, 4, GoPro_H3BE
                  24000000, 0, 4000, 3000, 7, 8000, 16, 4, Firefly_8S_12M_V72
                  24064000, 0, 4000, 3008, 7, 8000, 16, 4, SJCAM_SJ8_Pro
                  24192768, 0, 4624, 2616, 7, 9248, 16, 4, Git2P_(16:9)
                  24576000, 0, 4096, 3000, 7, 8192, 16, 4, FF8SE_MOD3
                  31850496, 0, 4608, 3456, 7, 9216, 16, 4, Git2_(4:3)
                  31850496, 0, 4608, 3456, 7, 9216, 16, 4, XIAOMI-YI
                  31850496, 0, 4608, 3456, 7, 9216, 16, 6, Git2P_M
                  32257024, 0, 4624, 3488, 7, 9248, 16, 4, Git2P_(4:3)
                  It seems a safe assumption that the width (=NX) is always a multiple of 16 and the height (=NY) a multiple of 8.
                  Thus PITCH = 2*NX (the size of a single line in bytes).
                  The only input is our guess of the aspect ratio (4:3, 16:9, ...):

                  Here is a simple function in IDL (a kind of Matlab, easy to translate to anything):
                  Code:
                  ;; example usage:  dim =  guess_dim_from_size( 10077696L, 16, 8, 1.32 )
                  function guess_dim_from_size,size,xmod,ymod,aspect_r
                  
                  if( (size MOD 2) ne 0 )then message,'ERROR: size not divisible by 2!'
                  s  = LONG(size)/2
                  if( (s MOD xmod) ne 0 || (s MOD ymod) ne 0 )then message,'ERROR: size not divisible by 2*xmod*ymod!'
                  s /= xmod*ymod
                  
                  ay = SQRT(size/2/float(aspect_r))/float(ymod)
                  
                  ay = FLOOR(ay+2)
                  ;; ay-- ===> search down, to increasing aspect ratios
                  while( (s/ay)*ay ne s )do ay--
                  ax = s / ay
                  
                  nx = ax*xmod
                  ny = ay*ymod
                  return,[nx,ny]
                  end
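                  Since the IDL is easy to translate, here is a Python port of the same logic (my translation, checked only against the raw2nef.ini table above):

```python
import math

def guess_dim_from_size(size, xmod=16, ymod=8, aspect_start=1.32):
    """Guess RAW width/height from the file size alone: 2 bytes per pixel,
    width a multiple of xmod, height a multiple of ymod. Starts near
    aspect_start and searches toward larger aspect ratios."""
    assert size % 2 == 0, 'size not divisible by 2'
    s = size // 2                        # total number of pixels
    assert s % (xmod * ymod) == 0, 'size not divisible by 2*xmod*ymod'
    s //= xmod * ymod
    # initial guess for height/ymod from the requested aspect ratio
    ay = math.floor(math.sqrt(size / 2 / aspect_start) / ymod) + 2
    while s % ay != 0:                   # decreasing ay => increasing aspect
        ay -= 1
    return (s // ay) * xmod, ay * ymod   # (nx, ny)
```

                  For example, guess_dim_from_size(10077696) gives (2592, 1944), and guess_dim_from_size(16588800, aspect_start=1.75) gives (3840, 2160), matching the table.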

                  Here are the results tested against raw2nef.ini (starting with ASPECT_START=1.32):
                  Code:
                  ASPECT_START=1.32000
                  size=10077696 --> 2592x1944 aspect=1.333 GOOD
                  size=15980544 --> 3264x2448 aspect=1.333 GOOD
                  size=16588800 --> 3456x2400 aspect=1.440 BAD! (true: 3840x2160 aspect=1.778)
                  size=17694720 --> 3456x2560 aspect=1.350 BAD! (true: 4096x2160 aspect=1.896)
                  size=22118400 --> 3840x2880 aspect=1.333 GOOD
                  size=22426624 --> 3872x2896 aspect=1.337 GOOD
                  size=23887872 --> 4608x2592 aspect=1.778 GOOD
                  size=24000000 --> 4000x3000 aspect=1.333 GOOD
                  size=24000000 --> 4000x3000 aspect=1.333 GOOD
                  size=24064000 --> 4000x3008 aspect=1.330 GOOD
                  size=24192768 --> 4624x2616 aspect=1.768 GOOD
                  size=24576000 --> 4096x3000 aspect=1.365 GOOD
                  size=31850496 --> 4608x3456 aspect=1.333 GOOD
                  size=32257024 --> 4624x3488 aspect=1.326 GOOD
                  That is very good!
                  Even for higher aspect ratios there is often no solution at 4:3, and the function finds the correct answer at a higher value.

                  The 2 failed examples are easily solved by a better guess (ASPECT_START=1.75):
                  Code:
                  ASPECT_START=1.75000
                  size=16588800 --> 3840x2160 aspect=1.778 GOOD
                  size=17694720 --> 4096x2160 aspect=1.896 GOOD
                  size=23887872 --> 4608x2592 aspect=1.778 GOOD
                  size=24192768 --> 4624x2616 aspect=1.768 GOOD
                  Given that very often there is a corresponding JPG, even the aspect ratio can be guessed completely automatically.
                  In any case, if nothing else is known and the user doesn't provide a guess, then ASPECT_START=1.32 seems the best choice.



                  • #15
                    Originally posted by hc1982 View Post
                    I have done some investigations on this topic previously too and have came up with similar results, but I have missed gamma from cc_reg.bin, so I had to reconstruct that missing part from grey level of lut.
                    That was good thinking. Indeed, the slice across gray is independent of the color transformation, and it will tell you the gamma.
                    Here I produce the "gray slice" of the original LUT and of my fit, in two different spaces:

                    [Image: illustrate_gray_slice-small.png]
                    The left is the "original 3D LUT" space, while the right is the linearized, or "de-convolved" (Gamma2_Inverse ## 3D_LUT ## Gamma1), space.
                    I found it easier to grasp and fit in this linear space. Plus it shows that the kernel of the transformation is linear (plus a "small epsilon" on top of that linear part).
                    I made the biggest progress when I realized that TWO different gamma functions are involved. A single gamma never gives a good fit.
