
How will a new hardware implement overlay rendering? Meaning, how can that be done on a new SoC? Is it there on Pandaboard? Download the tech-ref for the Pandaboard CPU.

Files:
overlay.h  - hardware/libhardware/include/hardware/
overlay.cpp - hardware/libhardware/modules/overlay/

liboverlay has functionally two modules: a Data module and a Control module. The Control module is driven by SurfaceFlinger, whereas the Data module is driven by MediaServer.

1a. createOverlay: this call first arrives here as a result of onVideoEvent. When onVideoEvent comes for the first time, TIHardwareRenderer (libstagefrighthw) is initialised; from its constructor the call reaches SurfaceFlinger through binder to call createOverlay. SurfaceFlinger calls createOverlay through the overlay_control_device_t object it obtained when it opened the overlay as a control device.
a. Here we create the shared memory region through ashmem_create_region, mmap it, cast it to (overlay_shared_t *) and preserve the fd.
b. Then we open() /dev/video1 or /dev/video2, depending on the argument passed.
c. Then we open and configure the resizer.
d. The overlay format is also set here, through calls to VIDIOC_G_FMT and VIDIOC_S_FMT.
e. setRotation and setCrop are done here.
f. Then we request 3 buffers from the v4l2 driver with VIDIOC_REQBUFS (they are mmapped later from the data side). This tells us how many buffers were actually allocated. These buffers are mmapped from the data side when overlay_initialize is called: VIDIOC_QUERYBUF and then mmap.
g. Then we create an instance of overlay_object, which has a member of type handle_t derived from native_handle. This handle_t object records the control fd (obtained when opening /dev/video1 from the control side), the shared memory fd, the resizer fd, w, h, format, the number of buffers returned by v4l2 and the size of the shared object. We return this overlay_object instance from here.
* Then we create an instance of the OverlayRef class and also pack an OverlayChannel object into it. This OverlayChannel object is basically a binder that can invoke destroyOverlay on the control side, i.e. SurfaceFlinger. From the code it looks like destroyOverlay can be called either from SurfaceFlinger, or from MediaServer through binder IPC via the OverlayChannel object.
* When we return this OverlayRef object as a Parcel from SurfaceFlinger towards MediaServer, this in turn calls data.writeNativeHandle on the SurfaceFlinger side and data.readNativeHandle on the MediaServer side. During writeNativeHandle all three fds are duplicated through the dup() system call: the ctl fd, the shared fd and the resizer fd. (A rough sketch of this control-side path follows below.)
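A minimal sketch of this control-side path, assuming a stock V4L2 overlay device. Names such as create_overlay_control, the ashmem region name and the 4096-byte size are illustrative, and the resizer/rotation/crop steps are left out; this is not the actual liboverlay code.

    #include <stdint.h>
    #include <string.h>
    #include <fcntl.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <cutils/ashmem.h>
    #include <linux/videodev2.h>

    /* Illustrative stand-in for createOverlay on the control side. */
    static int create_overlay_control(uint32_t w, uint32_t h, uint32_t *num_buffers)
    {
        /* a. shared memory region used by the control and data sides */
        int shared_fd = ashmem_create_region("overlay_data", 4096);
        void *shared = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                            MAP_SHARED, shared_fd, 0);  /* cast to overlay_shared_t* in liboverlay */

        /* b. open the video overlay device */
        int ctl_fd = open("/dev/video1", O_RDWR);

        /* d. get/modify/set the overlay format */
        struct v4l2_format fmt;
        memset(&fmt, 0, sizeof(fmt));
        fmt.type = V4L2_BUF_TYPE_VIDEO_OUTPUT;
        ioctl(ctl_fd, VIDIOC_G_FMT, &fmt);
        fmt.fmt.pix.width  = w;
        fmt.fmt.pix.height = h;
        ioctl(ctl_fd, VIDIOC_S_FMT, &fmt);

        /* f. request 3 buffers; they are mmapped later from the data side */
        struct v4l2_requestbuffers reqbuf;
        memset(&reqbuf, 0, sizeof(reqbuf));
        reqbuf.type   = V4L2_BUF_TYPE_VIDEO_OUTPUT;
        reqbuf.memory = V4L2_MEMORY_MMAP;
        reqbuf.count  = 3;
        ioctl(ctl_fd, VIDIOC_REQBUFS, &reqbuf);
        *num_buffers = reqbuf.count;   /* how many were actually allocated */

        /* g. ctl_fd, shared_fd, the resizer fd, w/h/format, etc. would then be
           packed into the native_handle-derived handle_t and returned via OverlayRef */
        (void)shared;
        return ctl_fd;
    }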

START HERE
1. As a result of the call to createOverlay(), inside SurfaceFlinger an object of type overlay_object_t is created (this also has the shared fd and resizer fd packed within). An OverlayRef object is also created, which is returned from the binder call.
2. This call actually happens through the binder interface from TIHardwareRenderer. We get here as a result of onVideoEvent in AwesomePlayer the first time, when the renderer is not yet initialised.
3. As a reply to this binder call, an OverlayRef object is returned from SurfaceFlinger to MediaServer (in TIHardwareRenderer).
4. Here we create an object of Overlay::Overlay (frameworks/base/libs/ui/Overlay.cpp) from the returned OverlayRef.
5. From this constructor the following get called:
   - overlay_device_open (opened as an overlay data device).
   - overlay_initialize (the handle to the overlay is passed here, along with the opened overlay_data_device_t). The native_handle which was created in SurfaceFlinger and received here through binder IPC is passed on now. Here we create an overlay data context and record all the information received from SurfaceFlinger via the native_handle. Here the buffers returned by v4l2 are mapped: VIDIOC_QUERYBUF is issued and we get the device addresses for the buffers to be mapped, returned as v4l2_buffer.m.offset. We fill the index and type fields while passing the structure to the ioctl. overlay_initialize internally calls open_shared_data.
   How do we know what to open? The macro OVERLAY_HARDWARE_MODULE_ID tells that (some dlopen mechanism).
6. After creating the Overlay, we call mOverlay->setParameter(CACHEABLE_BUFFERS, 0); from TIHardwareRenderer.cpp.
7. Then for every buffer we call overlay_getBufferAddress, and before every call to this we call overlay_getBufferCount to know the total number of buffers for this overlay. In every call to overlay_getBufferAddress we do v4l2_overlay_query_buffer and generate a v4l2_buffer (see the sketch after this list).
   - Applications set the type field of a struct v4l2_buffer to the same buffer type as the earlier struct v4l2_format type and struct v4l2_requestbuffers type, and fill the index field. Valid index numbers range from zero to the number of buffers allocated with VIDIOC_REQBUFS (struct v4l2_requestbuffers count) minus one. After calling VIDIOC_QUERYBUF with a pointer to this structure, drivers return an error code or fill the rest of the structure.
   - In the flags field the V4L2_BUF_FLAG_MAPPED, V4L2_BUF_FLAG_QUEUED and V4L2_BUF_FLAG_DONE flags will be valid. The memory field will be set to V4L2_MEMORY_MMAP, m.offset contains the offset of the buffer from the start of the device memory, and the length field its size. The driver may or may not set the remaining fields and flags.
   - When memory is V4L2_MEMORY_MMAP, m.offset is the offset of the buffer from the start of the device memory. The value is returned by the driver and, apart from serving as a parameter to the mmap() function, is not useful to applications.
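A sketch of the data-side buffer mapping done inside overlay_initialize, using the standard VIDIOC_QUERYBUF + mmap sequence described above. video_fd, num_buffers and the output arrays stand in for values recovered from the native_handle and the overlay data context; the function name is illustrative.

    #include <string.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <linux/videodev2.h>

    static int map_overlay_buffers(int video_fd, int num_buffers,
                                   void **addresses, size_t *lengths)
    {
        for (int i = 0; i < num_buffers; i++) {
            struct v4l2_buffer buf;
            memset(&buf, 0, sizeof(buf));
            buf.index  = i;                          /* we fill index and type */
            buf.type   = V4L2_BUF_TYPE_VIDEO_OUTPUT;
            buf.memory = V4L2_MEMORY_MMAP;
            if (ioctl(video_fd, VIDIOC_QUERYBUF, &buf) < 0)  /* driver fills the rest */
                return -1;

            /* buf.m.offset is the offset of the buffer from the start of device
               memory; it is only useful as the offset argument of mmap(). */
            addresses[i] = mmap(NULL, buf.length, PROT_READ | PROT_WRITE,
                                MAP_SHARED, video_fd, buf.m.offset);
            if (addresses[i] == MAP_FAILED)
                return -1;
            lengths[i] = buf.length;
        }
        return 0;
    }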

The ctx->mapping_data: here we basically get the addresses of the mapped buffers and keep them in the TIHardwareRenderer object. These addresses and other information related to the buffers are used later to copy the video data into the buffers and queue them. mapping_data is returned from here and pushed into the vector mOverlayAddresses at the calling place. And now playback keeps happening. (It would be interesting to know how these buffers are allocated, filled and then displayed.)

Q: Now, what is the role of SurfaceFlinger through the control interface?
A: Through the control interface the following functions are called: createOverlay, destroyOverlay, setPosition, getPosition, setParameter, commit, stage. Through the data interface the following functions are called: initialize, resizeInput, frameCopy, setCrop, getCrop, setParameter, dequeueBuffer, queueBuffer, getBufferAddress, getBufferCount.

Q: How do we get hold of the control device and use it to call some Overlay.cpp functions from SurfaceFlinger?
A: SurfaceFlinger will have already opened it as a control device (overlay_control_device_t) when SurfaceFlinger gets initialised. It will even have opened the framebuffer device for graphics (framebuffer_device_t). A sketch of this open path follows below.
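A minimal sketch of that open path, assuming the overlay HAL declarations in hardware/overlay.h (hw_get_module plus an overlay_control_open inline wrapper); treat the exact helper name and constants as an assumption and check the header on your tree.

    #include <hardware/hardware.h>
    #include <hardware/overlay.h>

    /* Sketch: how SurfaceFlinger would get hold of the control device. */
    static struct overlay_control_device_t *open_overlay_control(void)
    {
        const hw_module_t *module;
        struct overlay_control_device_t *dev = NULL;

        /* OVERLAY_HARDWARE_MODULE_ID is resolved to liboverlay through the
           usual hw_get_module()/dlopen mechanism mentioned above. */
        if (hw_get_module(OVERLAY_HARDWARE_MODULE_ID, &module) == 0) {
            /* assumed inline helper from overlay.h; it ends up calling
               module->methods->open(module, OVERLAY_HARDWARE_CONTROL, ...) */
            overlay_control_open(module, &dev);
        }
        return dev;
    }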

Q: When is the overlay displayed for the first time? (Knowing which v4l2 call does this will help.)
A: This is done by a call to enable_streaming. This call comes when qBuffer is called for the first time, and it internally calls the ioctl VIDIOC_STREAMON.

Q: How is the overlay taken off? (Knowing which v4l2 call does this will help.)
A: It is taken off by a call to disable_streaming_locked, which is called from overlay_destroyOverlay. This actually results in calling VIDIOC_STREAMOFF on the v4l2 driver. (A sketch of both calls follows below.)
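A small sketch of the two ioctls behind those calls; the buffer type is assumed to be V4L2_BUF_TYPE_VIDEO_OUTPUT, and the wrapper names are illustrative.

    #include <sys/ioctl.h>
    #include <linux/videodev2.h>

    /* enable_streaming path: the overlay becomes visible */
    static int overlay_stream_on(int video_fd)
    {
        int type = V4L2_BUF_TYPE_VIDEO_OUTPUT;
        return ioctl(video_fd, VIDIOC_STREAMON, &type);
    }

    /* disable_streaming_locked path: the overlay is taken off */
    static int overlay_stream_off(int video_fd)
    {
        int type = V4L2_BUF_TYPE_VIDEO_OUTPUT;
        return ioctl(video_fd, VIDIOC_STREAMOFF, &type);
    }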

Q: How would SurfaceFlinger have made the surface being displayed on the graphics plane fully transparent? RGBA values?
A:

Q: What are setCrop, getCrop, commit, stage and setParameter? When are they called?
A:

Limitations of the overlay from a hardware perspective...

Q: How does SurfaceFlinger control the graphics overlay? How are the overlay buffers managed? (Write this.)
A: Whenever there is a call to TIHardwareRenderer::render, we check whether all the buffers are in the queue.
- If yes, we call dqbuffer and get the index of the buffer being dequeued. Then we do a colour conversion, if required, from YUV420 to YUV422 for the software decoder. Then we memcpy the data to be rendered into the overlay address referred to by that index, and call qBuffer for that index.
- If no, we increment the index, do the same colour conversion if required, memcpy the data to be rendered into the overlay address referred to by that index, and call qBuffer for that index. (A rough sketch follows below.)
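A hypothetical sketch of that buffer juggling, written against the raw V4L2 ioctls for brevity (the real path goes through queueBuffer/dequeueBuffer of the overlay data device); all names are illustrative and the colour conversion is replaced by a plain copy.

    #include <string.h>
    #include <sys/ioctl.h>
    #include <linux/videodev2.h>

    static void render_frame(int video_fd, const void *frame, size_t frame_size,
                             void **overlay_addresses, unsigned int buffer_count,
                             unsigned int *queued_count, unsigned int *next_index)
    {
        unsigned int idx;
        struct v4l2_buffer buf;
        memset(&buf, 0, sizeof(buf));
        buf.type   = V4L2_BUF_TYPE_VIDEO_OUTPUT;
        buf.memory = V4L2_MEMORY_MMAP;

        if (*queued_count == buffer_count) {
            /* all buffers are with the driver: dequeue one and reuse its slot */
            if (ioctl(video_fd, VIDIOC_DQBUF, &buf) < 0)
                return;
            idx = buf.index;
            (*queued_count)--;
        } else {
            idx = (*next_index)++ % buffer_count;
        }

        /* YUV420 -> YUV422 conversion would happen here for the software
           decoder path; a plain memcpy stands in for it */
        memcpy(overlay_addresses[idx], frame, frame_size);

        buf.index = idx;
        if (ioctl(video_fd, VIDIOC_QBUF, &buf) == 0)
            (*queued_count)++;
    }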

Q: How do the Data module and the Control module make use of the shared data? That's also very important to understand. (Write this.)
A: (see the annotated overlay_shared_t below)

typedef struct {
    uint32_t marker;         // Used as a marker to cross-check on the data side whether we are
                             // referring to the correct shared memory chunk.
    uint32_t size;           // Basically the size of a page in memory, typically 4096 bytes. This is
                             // shared from the control side to the data side so that the data side
                             // knows by how much to unmap when overlay_destroyOverlay is called.
    volatile int32_t refCnt; // Set to 1 when the shared data is created for the first time from the
                             // control side, and atomically incremented from the data side when the
                             // shared data is opened there. Whenever there is a call to
                             // destroy_shared_data(), which can come either from the control side
                             // through overlay_destroyOverlay() or from the data side through
                             // close_shared_data(), it is made sure that only the last side coming
                             // to close the shared data destroys the mutex; the last side to
                             // deallocate releases the mutex, otherwise the remaining side would
                             // deadlock waiting on a mutex that has already been destroyed:
                             //   if (android_atomic_dec(&shared->refCnt) == 1) {
                             //       if (pthread_mutex_destroy(&shared->lock)) {
                             //           LOGE("Failed to Close Overlay Semaphore!\n");
                             //       }
                             //   }
    uint32_t controlReady;   // Only updated by the control side: set to 1 when overlay_commit
                             // (disable streaming, set rotation and position, enable streaming) is
                             // called, and set to 0 in overlay_createOverlay.
    uint32_t dataReady;      // Only updated by the data side:
                             //  - set to 0 when overlay_initialize is called
                             //  - set to 0 whenever disable_streaming_locked is called
                             //  - set to 1 whenever overlay_setCrop is called
                             //  - set to 1 whenever qBuffer is called, if setCrop was not called
                             //  - set to 0 when the overlay data device is closed from the data side
                             //  - overlay_resizeInput returns -1 if this is set
                             //  - overlay_setParameter returns -1 if this is set
    pthread_mutex_t lock;    // The enabling and disabling of streaming, and the flags streamEn and
                             // streamingReset, are modified under this lock; the controlReady and
                             // dataReady flags are modified under this lock as well.
    uint32_t streamEn;       // Set to 1 when enable_streaming_locked is called.
    uint32_t streamingReset; // Set to 1 (with streamEn set to 0) whenever we disable the overlay
                             // window by calling disable_streaming_locked.
} overlay_shared_t;

Notes:
1. Every time before enabling streaming it is checked whether both the data and the control side are ready; if not, streaming is not enabled.
2. Only the last side to close the shared data destroys the mutex; a sketch of this refcount handshake follows below.
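A minimal sketch of that refcount handshake, assuming the overlay_shared_t above and a process-shared mutex; the function names mirror those mentioned in the comments but the bodies are illustrative, not the liboverlay implementation.

    #include <stdint.h>
    #include <pthread.h>
    #include <unistd.h>
    #include <sys/mman.h>
    #include <cutils/atomic.h>

    /* control side: first user of the shared block */
    static void init_shared_data(overlay_shared_t *shared)
    {
        pthread_mutexattr_t attr;
        pthread_mutexattr_init(&attr);
        pthread_mutexattr_setpshared(&attr, PTHREAD_PROCESS_SHARED);
        pthread_mutex_init(&shared->lock, &attr);
        shared->refCnt = 1;
    }

    /* data side: bumps the count when it maps the same ashmem region */
    static void open_shared_data_side(overlay_shared_t *shared)
    {
        android_atomic_inc(&shared->refCnt);
    }

    /* either side: only the last one out destroys the mutex, then unmaps */
    static void destroy_shared_data(int shared_fd, overlay_shared_t *shared)
    {
        uint32_t size = shared->size;
        if (android_atomic_dec(&shared->refCnt) == 1) {
            /* the old value was 1, so we are the last user */
            pthread_mutex_destroy(&shared->lock);
        }
        munmap(shared, size);
        close(shared_fd);
    }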

How exactly is the shared object shared between the processes, and how is it accessed on the other side? The descriptor to the shared memory allocated using ashmem is shared over the binder interface with the other process. Internally the dup() system call is used to duplicate the descriptor, which keeps pointing at the same memory.

Find out what the relation is with the Surface set by the call to setVideoSurface. *****
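A tiny sketch of the receiving side, assuming the descriptor arrived via readNativeHandle() (already dup()ed during writeNativeHandle on the SurfaceFlinger side) and the size was carried in the handle; the function name is illustrative.

    #include <sys/mman.h>

    /* MediaServer side: map the ashmem region behind the duplicated descriptor.
       Both mappings now refer to the same pages, so writes made by the control
       side are visible to the data side and vice versa. */
    static void *map_shared_from_fd(int dup_shared_fd, size_t size)
    {
        return mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED,
                    dup_shared_fd, 0);
    }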
