
Python documentation #90

Open
dinapappor opened this issue Feb 3, 2024 · 15 comments

Comments

@dinapappor

Hello! I know picamera2 exists, but I really do not like using it. It's very high level and hides a lot of things I'd rather control myself. I'd like to use libcamera much like I used the original picamera's mmalobj (see https://picamera.readthedocs.io/en/release-1.13/api_mmalobj.html#performance-hints for instance), where I could go very low level. It would be good if there were documentation for that.

Thanks! :)

@kbingham
Owner

kbingham commented Feb 4, 2024

The Python bindings should aim to closely match the C++ implementation. You're right, I don't think we have explicit Python documentation for this. Is there a way you would anticipate this being provided separately?

@dinapappor
Author

dinapappor commented Feb 4, 2024 via email

@dinapappor
Author

I've been trying to use libcamera for a few days and realized I must mmap the frame buffers myself.

I've taken the simple-cam.py example and tried to read a frame into an image so I can overlay another image on it.

In process_request() I've done this:

import mmap             # required by the mmap.mmap() call below
from PIL import Image   # required by Image.frombuffer() below

def process_request(request):
    global camera
    global dashcam_title_image

    # When a request has completed, it is populated with a metadata control
    # list that allows an application to determine various properties of
    # the completed request. This can include the timestamp of the Sensor
    # capture, or its gain and exposure values, or properties from the IPA
    # such as the state of the 3A algorithms.
    #
    # To examine each request, print all the metadata for inspection. A custom
    # application can parse each of these items and process them according to
    # its needs.

    #requestMetadata = request.metadata
    #for id, value in requestMetadata.items():
    #    print(f'\t{id.name} = {value}')

    # Each buffer has its own FrameMetadata to describe its state, or the
    # usage of each buffer. While in our simple capture we only provide one
    # buffer per request, a request can have a buffer for each stream that
    # is established when configuring the camera.
    #
    # This allows a viewfinder and a still image to be processed at the
    # same time, or to allow obtaining the RAW capture buffer from the
    # sensor along with the image as processed by the ISP.

    buf = None
    buffers = request.buffers
    length = 0
    for _, buffer in buffers.items():
        metadata = buffer.metadata
        #print(dir(buffer.planes))

        plane = buffer.planes[0]
        buf = mmap.mmap(plane.fd, plane.length, mmap.MAP_SHARED,
                        mmap.PROT_WRITE | mmap.PROT_READ)

        img = Image.frombuffer(
                        'L',
                        (800,600),
                        buf,
                        'raw',
                        'L',
                        0,
                        1
                        )

        #print(dir(img))
        img.paste(
                        dashcam_title_image,
                        (0, 0)
                        )
        img.save("test.png")
        exit()  # bail out after the first frame (debugging)

        # Print some information about the buffer which has completed.
        #print(f' seq: {metadata.sequence:06} timestamp: {metadata.timestamp} bytesused: ' +
        #      '/'.join([str(p.bytes_used) for p in metadata.planes]))

        # Image data can be accessed here, but the FrameBuffer
        # must be mapped by the application

    # Re-queue the Request to the camera.
    request.reuse()
    camera.queue_request(request)

But the result is kind of weird; I am getting:

[attached image: test]

The overlay gets added correctly but the rest of the image is seemingly garbage.

The camera is connected correctly, because if I use libcamera-vid or something I get proper video.

@kbingham
Copy link
Owner

It's hard to see what's going on from only a small snippet. If you're getting raw data, then does that mean you are asking for the RAW Bayer data from the camera? Have you configured a pixel format correctly? Or checked that the pixel format that is returned is what you expect?

@dinapappor
Author

> It's hard to see what's going on from only a small snippet. If you're getting raw data, then does that mean you are asking for the RAW Bayer data from the camera? Have you configured a pixel format correctly? Or checked that the pixel format that is returned is what you expect?

I have taken simple-cam and only modified process_request, which you can see in its entirety in the snippet above.

How can I find out which pixel formats are available through libcamera's Python bindings?
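
A rough sketch of how the supported formats could be enumerated, assuming the Python bindings mirror the C++ StreamFormats API and that camera is the already-acquired camera from simple-cam.py:

import libcamera

# Sketch only: list the pixel formats (and sizes) the first stream can produce.
# Assumes `camera` is an acquired libcamera.Camera, as in simple-cam.py.
config = camera.generate_configuration([libcamera.StreamRole.Viewfinder])
stream_formats = config.at(0).formats            # StreamFormats for stream 0

for pixel_format in stream_formats.pixel_formats:
    print(pixel_format, stream_formats.sizes(pixel_format))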

@dinapappor
Author

Okay, I managed to set the pixel format to YUV420 (and the size!) and was able to actually get a picture.

[attached image: test]

Now I just need to figure out how to get the picture in color.

In my old code based on mmal (mmalobj) I just specified YUV420 as the format, got a picture in color, pasted an image over it, and then sent it back to the output ports for encoding. Is something similar required here?
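
A minimal sketch of that configuration step, assuming the bindings expose the generated format constants as libcamera.formats (libcamera.PixelFormat('YUV420') would be the string-based alternative); validate() may still adjust the request:

import libcamera

# Sketch only: request YUV420 at 800x600 before configuring the camera.
# Assumes `camera` is an acquired libcamera.Camera, as in simple-cam.py.
config = camera.generate_configuration([libcamera.StreamRole.Viewfinder])
stream_config = config.at(0)
stream_config.pixel_format = libcamera.formats.YUV420
stream_config.size = libcamera.Size(800, 600)

config.validate()                                # may adjust the request
print("negotiated:", stream_config.pixel_format, stream_config.size)
camera.configure(config)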

@kbingham
Owner

A greyscale image suggests you're perhaps only getting a single plane of the YUV420?
Can you check to see if anything is set on plane[1]?
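
For reference, a rough, assumption-heavy sketch of getting a colour image out of a YUV420 buffer once the whole thing is mapped: it assumes all three planes live in one contiguous dmabuf starting at offset 0, that the row stride equals the width, and that NumPy and OpenCV are available for the colour conversion.

import mmap

import cv2
import numpy as np

# Sketch only: map the whole YUV420 buffer and convert it to RGB.
# Assumes the planes share one fd, are contiguous, and have no row padding.
def yuv420_buffer_to_rgb(buffer, width, height):
    total = sum(p.length for p in buffer.planes)            # Y + U + V bytes
    mem = mmap.mmap(buffer.planes[0].fd, total, mmap.MAP_SHARED,
                    mmap.PROT_READ)
    yuv = np.frombuffer(mem, dtype=np.uint8,
                        count=width * height * 3 // 2).copy()
    mem.close()
    yuv = yuv.reshape((height * 3 // 2, width))
    return cv2.cvtColor(yuv, cv2.COLOR_YUV2RGB_I420)        # planar YUV420 -> RGB

With the 800x600 configuration above that would be something like rgb = yuv420_buffer_to_rgb(buffer, 800, 600) inside process_request(); Image.fromarray(rgb) then gives a PIL image the overlay can be pasted onto.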

@dinapappor
Author

> Can you check to see if anything is set on plane[1]?

It is, but it seems to be too big for Image.frombuffer() :(

@kbingham
Owner

Then that probably means plane[1] might not be real. Does the Python interface tell you how many planes there are?

I wonder if https://git.libcamera.org/libcamera/libcamera.git/tree/src/py/cam might be a better place to find examples of what you need, as that's a more fully implemented Python equivalent of our C++ cam implementation.

@dinapappor
Author

> Then that probably means plane[1] might not be real. Does the Python interface tell you how many planes there are?

It seems kind of inconsistent: if I do print(len(buffer.planes)) it sometimes gives me 0 and sometimes 3.

@dinapappor
Author

Also, thanks for the link. I'll try to play around with it! 🙇‍♂️

@dinapappor
Author

It seems that with the examples in the other repo you provided I face the same issue as with https://github.com/kbingham/libcamera/blob/master/src/py/examples/simple-continuous-capture.py:

There is no libcamera.utils and thus no MappedFrameBuffer.

@kbingham
Owner

It's here:
https://git.libcamera.org/libcamera/libcamera.git/tree/src/py/libcamera/utils/MappedFrameBuffer.py
That sounds like maybe it hasn't been installed, or isn't on your Python path.

@dinapappor
Author

Maybe there's a discrepancy between what is installed with Raspberry Pi OS and the libcamera VCS? Because I can't import it.

@kbingham
Owner

Just copy that MappedFrameBuffer.py into your project. I don't think it's exposed as part of the public API; it's just a utility we wrote.
I've argued on the lists before that we should provide helpers to map buffers, but there are always caveats and it wasn't accepted.
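
If copying the file works, usage along the lines of the upstream simple-continuous-capture.py example would presumably look something like the sketch below; the context-manager form and the planes property are taken from the current MappedFrameBuffer.py, so treat it as illustrative rather than a stable API (inspect_request is just a hypothetical helper name):

# Sketch only: map a completed request's buffers with the copied utility.
# Assumes MappedFrameBuffer.py sits next to this script.
from MappedFrameBuffer import MappedFrameBuffer

def inspect_request(request):
    for stream, buffer in request.buffers.items():
        # Mapping happens on __enter__ and is undone again on __exit__.
        with MappedFrameBuffer(buffer) as mfb:
            for i, plane in enumerate(mfb.planes):       # one memoryview per plane
                print(f"plane {i}: {len(plane)} bytes")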
