libcamera v0.4.0+5314-fc77c53d-nvm
Namespace for rpi controls.
Enumerations

enum SyncModeEnum { SyncModeOff = 0, SyncModeServer = 1, SyncModeClient = 2 }
    Supported SyncMode values.

Variables
const Control< bool > StatsOutputEnable
    Toggles the Raspberry Pi IPA to output the hardware generated statistics.
const Control< Span< const uint8_t > > Bcm2835StatsOutput
    Span of the BCM2835 ISP generated statistics for the current frame.
const Control< Span< const Rectangle > > ScalerCrops
    An array of rectangles, where each singular value has identical functionality to the ScalerCrop control. This control allows the Raspberry Pi pipeline handler to control individual scaler crops per output stream.
const Control< Span< const uint8_t > > PispStatsOutput
    Span of the PiSP Frontend ISP generated statistics for the current frame. This is sent in the Request metadata if the StatsOutputEnable control is set to true. The statistics struct definition can be found in https://github.com/raspberrypi/libpisp/blob/main/src/libpisp/frontend/pisp_statistics.h.
const Control< Span< const float > > CnnOutputTensor
    This control returns a span of floating point values that represent the output tensors from a Convolutional Neural Network (CNN). The size and format of this array of values is entirely dependent on the neural network used, and further post-processing may need to be performed at the application level to generate the final desired output. This control is agnostic of the hardware or software used to generate the output tensors.
const Control< Span< const uint8_t > > CnnOutputTensorInfo
    This control returns the structure of the CnnOutputTensor.
const Control< bool > CnnEnableInputTensor
    Boolean to control if the IPA returns the input tensor used by the CNN to generate the output tensors via the CnnInputTensor control. Because the input tensor may be relatively large, for efficiency reasons avoid enabling input tensor output unless required for debugging purposes.
const Control< Span< const uint8_t > > CnnInputTensor
    This control returns a span of uint8_t pixel values that represent the input tensor for a Convolutional Neural Network (CNN). The size and format of this array of values is entirely dependent on the neural network used, and further post-processing (e.g. pixel normalisations) may need to be performed at the application level to generate the final input image.
const Control< Span< const uint8_t > > CnnInputTensorInfo
    This control returns the structure of the CnnInputTensor.
const Control< Span< const int32_t, 2 > > CnnKpiInfo
    This control returns performance metrics for the CNN processing stage. Two values are returned in this span: the runtime of the CNN/DNN stage and the runtime of the DSP stage, both in milliseconds.
const std::array< const ControlValue, 3 > SyncModeValues
    List of all SyncMode supported values.
const std::map< std::string, int32_t > SyncModeNameValueMap
    Map of all SyncMode supported value names (in std::string format) to value.
const Control< int32_t > SyncMode
    Enable or disable camera synchronisation ("sync") mode.
const Control< bool > SyncReady
    When using the camera synchronisation algorithm, the server broadcasts timing information to the clients. This also includes the time (some number of frames in the future, called the "ready time") at which the server will signal its controlling application, using this control, to start using the image frames.
const Control< int64_t > SyncTimer
    This reports the amount of time, in microseconds, until the "ready time", at which the server and client will signal their controlling applications that the frames are now synchronised and should be used. The value may be refined slightly over time, becoming more precise as the "ready time" approaches.
const Control< int32_t > SyncFrames
    The number of frames the server should wait, after enabling SyncModeServer, before signalling (via the SyncReady control) that frames should be used. This therefore determines the "ready time" for all synchronised cameras.
Detailed Description

Namespace for rpi controls.

Enumeration Type Documentation

enum SyncModeEnum

Supported SyncMode values.

Variable Documentation

extern const Control< Span< const uint8_t > > Bcm2835StatsOutput
Span of the BCM2835 ISP generated statistics for the current frame.
This is sent in the Request metadata if the StatsOutputEnable control is set to true. The statistics struct definition can be found in include/linux/bcm2835-isp.h.
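As a rough illustration, not taken from the libcamera documentation, and assuming the rpi vendor controls are exposed through the generated control_ids.h header (included by <libcamera/libcamera.h>), the raw statistics could be read from a completed request along these lines. Later sketches in this section reuse the same includes and using-directive.

#include <iostream>

#include <libcamera/libcamera.h>

using namespace libcamera;

/* Sketch of a handler connected to Camera::requestCompleted. */
static void requestComplete(Request *request)
{
    /* The statistics arrive as an opaque byte span in the request metadata. */
    const auto stats = request->metadata().get(controls::rpi::Bcm2835StatsOutput);
    if (stats)
        std::cout << "Received " << stats->size()
                  << " bytes of ISP statistics" << std::endl;
}

The span is only present when StatsOutputEnable has been set to true on the queued request (see StatsOutputEnable below).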
extern const Control< bool > CnnEnableInputTensor

Boolean to control if the IPA returns the input tensor used by the CNN to generate the output tensors via the CnnInputTensor control. Because the input tensor may be relatively large, for efficiency reasons avoid enabling input tensor output unless required for debugging purposes.
extern const Control< Span< const uint8_t > > CnnInputTensor
This control returns a span of uint8_t pixel values that represent the input tensor for a Convolutional Neural Network (CNN). The size and format of this array of values is entirely dependent on the neural network used, and further post-processing (e.g. pixel normalisations) may need to be performed at the application level to generate the final input image.
The structure of the span is described by the CnnInputTensorInfo control.
extern const Control< Span< const uint8_t > > CnnInputTensorInfo
This control returns the structure of the CnnInputTensor. This structure takes the following form:

constexpr unsigned int NetworkNameLen = 64;

struct CnnInputTensorInfo {
    char networkName[NetworkNameLen];
    uint32_t width;
    uint32_t height;
    uint32_t numChannels;
};

where:
- networkName is the name of the CNN used,
- width and height are the input tensor image width and height in pixels,
- numChannels is the number of channels in the input tensor image.
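For illustration only, and reusing the struct definition quoted above (which must be compiled into the application; includes as in the earlier sketch), the info could be unpacked from the raw byte span roughly as follows:

#include <cstdint>
#include <cstring>

/* Layout as documented above for CnnInputTensorInfo. */
constexpr unsigned int NetworkNameLen = 64;

struct CnnInputTensorInfo {
    char networkName[NetworkNameLen];
    uint32_t width;
    uint32_t height;
    uint32_t numChannels;
};

static void printInputTensorInfo(Request *request)
{
    const auto raw = request->metadata().get(controls::rpi::CnnInputTensorInfo);
    if (!raw || raw->size() < sizeof(CnnInputTensorInfo))
        return;

    /* The control is an opaque byte span; copy it into the struct. */
    CnnInputTensorInfo info{};
    std::memcpy(&info, raw->data(), sizeof(info));

    std::cout << info.networkName << ": " << info.width << "x" << info.height
              << ", " << info.numChannels << " channel(s)" << std::endl;
}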
extern const Control< Span< const float > > CnnOutputTensor
This control returns a span of floating point values that represent the output tensors from a Convolutional Neural Network (CNN). The size and format of this array of values is entirely dependent on the neural network used, and further post-processing may need to be performed at the application level to generate the final desired output. This control is agnostic of the hardware or software used to generate the output tensors.
The structure of the span is described by the CnnOutputTensorInfo control.
extern const Control< Span< const uint8_t > > CnnOutputTensorInfo
This control returns the structure of the CnnOutputTensor. This structure takes the following form:

constexpr unsigned int NetworkNameLen = 64;
constexpr unsigned int MaxNumTensors = 8;
constexpr unsigned int MaxNumDimensions = 8;

struct CnnOutputTensorInfo {
    char networkName[NetworkNameLen];
    uint32_t numTensors;
    OutputTensorInfo info[MaxNumTensors];
};

with

struct OutputTensorInfo {
    uint32_t tensorDataNum;
    uint32_t numDimensions;
    uint16_t size[MaxNumDimensions];
};

where:
- networkName is the name of the CNN used,
- numTensors is the number of output tensors returned,
- tensorDataNum gives the number of elements in each output tensor,
- numDimensions gives the dimensionality of each output tensor,
- size gives the size of each dimension in each output tensor.
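A sketch along the same lines as the input-tensor example above (the struct definitions quoted here, plus NetworkNameLen from the previous example, are assumed to be compiled into the application): tensorDataNum for each tensor gives how many of the CnnOutputTensor floats belong to it, so the flat span can be split per tensor.

constexpr unsigned int MaxNumTensors = 8;
constexpr unsigned int MaxNumDimensions = 8;

struct OutputTensorInfo {
    uint32_t tensorDataNum;
    uint32_t numDimensions;
    uint16_t size[MaxNumDimensions];
};

struct CnnOutputTensorInfo {
    char networkName[NetworkNameLen];
    uint32_t numTensors;
    OutputTensorInfo info[MaxNumTensors];
};

static void processOutputTensors(Request *request)
{
    const auto rawInfo = request->metadata().get(controls::rpi::CnnOutputTensorInfo);
    const auto tensors = request->metadata().get(controls::rpi::CnnOutputTensor);
    if (!rawInfo || !tensors || rawInfo->size() < sizeof(CnnOutputTensorInfo))
        return;

    CnnOutputTensorInfo info{};
    std::memcpy(&info, rawInfo->data(), sizeof(info));

    /* Split the flat float span into the individual output tensors. */
    std::size_t offset = 0;
    for (uint32_t i = 0; i < info.numTensors && i < MaxNumTensors; i++) {
        Span<const float> tensor = tensors->subspan(offset, info.info[i].tensorDataNum);
        offset += info.info[i].tensorDataNum;
        /* Application-specific post-processing of `tensor` goes here. */
        (void)tensor;
    }
}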
extern const Control< Span< const uint8_t > > PispStatsOutput
Span of the PiSP Frontend ISP generated statistics for the current frame. This is sent in the Request metadata if the StatsOutputEnable control is set to true. The statistics struct definition can be found in https://github.com/raspberrypi/libpisp/blob/main/src/libpisp/frontend/pisp_statistics.h.
extern const Control< Span< const Rectangle > > ScalerCrops
An array of rectangles, where each singular value has identical functionality to the ScalerCrop control. This control allows the Raspberry Pi pipeline handler to control individual scaler crops per output stream.
The order of rectangles passed into the control must match the order of streams configured by the application. The pipeline handler will only configure crop rectangles up to the number of output streams configured. All subsequent rectangles passed into this control are ignored by the pipeline handler.
If both rpi::ScalerCrops and ScalerCrop controls are present in a ControlList, the latter is discarded, and crops are obtained from this control.
Note that using different crop rectangles for each output stream with this control is only applicable on the Pi5/PiSP platform. This control should also be considered temporary/draft and will be replaced with official libcamera API support for per-stream controls in the future.
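A minimal sketch of requesting one crop per configured output stream, in configuration order; the stream sizes and crop rectangles below are made up for illustration:

#include <array>

static void setPerStreamCrops(Request *request)
{
    /* One rectangle per output stream, in the order the streams were
     * configured. The sizes are illustrative only. */
    const std::array<Rectangle, 2> crops = {
        Rectangle(0, 0, 4608, 2592),       /* stream 0: full sensor area */
        Rectangle(1152, 648, 2304, 1296),  /* stream 1: 2x centre zoom */
    };

    request->controls().set(controls::rpi::ScalerCrops,
                            Span<const Rectangle>(crops.data(), crops.size()));
}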
extern const Control< bool > StatsOutputEnable
Toggles the Raspberry Pi IPA to output the hardware generated statistics.
When this control is set to true, the IPA outputs a binary dump of the hardware generated statistics through the Request metadata in the Bcm2835StatsOutput control.
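For example, as a sketch (camera here is an already-configured libcamera Camera):

static void queueWithStats(Camera *camera, Request *request)
{
    /* Ask the Raspberry Pi IPA to emit the hardware statistics for this request. */
    request->controls().set(controls::rpi::StatsOutputEnable, true);
    camera->queueRequest(request);
}

The statistics then appear in the completed request's metadata as Bcm2835StatsOutput (or PispStatsOutput on PiSP platforms), as shown in the earlier sketch.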
extern const Control< int32_t > SyncFrames
The number of frames the server should wait, after enabling SyncModeServer, before signalling (via the SyncReady control) that frames should be used. This therefore determines the "ready time" for all synchronised cameras.
This control value should be set only for the device that is to act as the server, before or at the same moment at which SyncModeServer is enabled.
extern const Control< int32_t > SyncMode
Enable or disable camera synchronisation ("sync") mode.
When sync mode is enabled, a camera will synchronise frames temporally with other cameras, either attached to the same device or a different one. There should be one "server" device, which broadcasts timing information to one or more "clients". Communication is one-way, from server to clients only, and it is only clients that adjust their frame timings to match the server.
Sync mode requires all cameras to be running at (as far as possible) the same fixed framerate. Clients may continue to make adjustments to keep their cameras synchronised with the server for the duration of the session, though any updates after the initial ones should remain small.
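As a sketch of how the two roles might be selected (the SyncFrames value of 100 frames is an arbitrary example, not a documented default):

static void enableSyncServer(Request *request)
{
    /* This device broadcasts timing information to the clients. */
    request->controls().set(controls::rpi::SyncMode, controls::rpi::SyncModeServer);
    /* Declare the "ready time" to be roughly 100 frames away (example value). */
    request->controls().set(controls::rpi::SyncFrames, 100);
}

static void enableSyncClient(Request *request)
{
    /* Clients adjust their frame timing to follow the server. */
    request->controls().set(controls::rpi::SyncMode, controls::rpi::SyncModeClient);
}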
extern const Control< bool > SyncReady
When using the camera synchronisation algorithm, the server broadcasts timing information to the clients. This also includes the time (some number of frames in the future, called the "ready time") at which the server will signal its controlling application, using this control, to start using the image frames.
The client receives the "ready time" from the server, and will signal its application to start using the frames at this same moment.
While this control value is false, applications (on both client and server) should continue to wait, and not use the frames.
Once this value becomes true, it means that this is the first frame where the server and its clients have agreed that they will both be synchronised and that applications should begin consuming frames. Thereafter, this control will continue to signal the value true for the rest of the session.
extern const Control< int64_t > SyncTimer
This reports the amount of time, in microseconds, until the "ready time", at which the server and client will signal their controlling applications that the frames are now synchronised and should be used. The value may be refined slightly over time, becoming more precise as the "ready time" approaches.
Servers always report this value, whereas clients will omit this control until they have received a message from the server that enables them to calculate it.
Normally the value will start positive (the "ready time" is in the future), and decrease towards zero, before becoming negative (the "ready time" has elapsed). So there should be just one frame where the timer value is, or is very close to, zero - the one for which the SyncReady control becomes true. At this moment, the value indicates how closely synchronised the client believes it is with the server.
But note that if frames are being dropped, then the "near zero" valued frame, or indeed any other, could be skipped. In these cases the timer value allows an application to deduce that this has happened.
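A sketch of how a completed request's metadata might be checked for these two controls; processFrame is a hypothetical application hook, not part of libcamera:

void processFrame(Request *request); /* hypothetical application hook */

static void checkSync(Request *request)
{
    const auto timer = request->metadata().get(controls::rpi::SyncTimer);
    const auto ready = request->metadata().get(controls::rpi::SyncReady);

    if (timer)
        std::cout << "Sync ready in " << *timer << " us" << std::endl;

    /* Only start consuming frames once SyncReady reports true. */
    if (ready && *ready)
        processFrame(request);
}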