
On other sites (10005)
-
FFmpeg: unspecified pixel format when opening video with custom context
14 February 2021, by Pedro

I am trying to decode a video with a custom context. The purpose is that I want to decode the video directly from memory. In the following code, I am reading from a file in the read() function passed to avio_alloc_context() - but this is just for testing purposes.


I think I've read every post there is on Stack Overflow or on any other website related to this topic. At least I definitely tried my best to do so. While there is much in common, the details differ: people set different flags, some say av_probe_input_format() is required, some say it isn't, etc. And for some reason nothing works for me.


My problem is that the pixel format is unspecified (see output below), which is why I run into problems later when calling sws_getContext(). I checked pFormatContext->streams[videoStreamIndex]->codec->pix_fmt, and it is -1.


Please note my comments // things I tried and // seems not to help in the code. I think the answer might be hidden somewhere there. I tried many combinations of the hints I've read so far, but I am missing a detail, I guess.


The problem is not the video file, because when I go the standard way and just call avformat_open_input(&pFormatContext, pFilePath, NULL, NULL) without a custom context, everything runs fine.


The code compiles and runs as is.



#include <libavformat/avformat.h>
#include <stdio.h>
#include <string.h>

FILE *f;

static int read(void *opaque, uint8_t *buf, int buf_size) {
 if (feof(f)) return -1;
 return fread(buf, 1, buf_size, f);
}

int openVideo(const char *pFilePath) {
 const int bufferSize = 32768;
 int ret;

 av_register_all();

 f = fopen(pFilePath, "rb");
 uint8_t *pBuffer = (uint8_t *) av_malloc(bufferSize + AVPROBE_PADDING_SIZE);
 AVIOContext *pAVIOContext = avio_alloc_context(pBuffer, bufferSize, 0, NULL,
 &read, NULL, NULL);

 if (!f || !pBuffer || !pAVIOContext) {
 printf("error: open / alloc failed\n");
 // cleanup...
 return 1;
 }

 AVFormatContext *pFormatContext = avformat_alloc_context();
 pFormatContext->pb = pAVIOContext;

 const int readBytes = read(NULL, pBuffer, bufferSize);

 printf("readBytes = %i\n", readBytes);

 if (readBytes <= 0) {
 printf("error: read failed\n");
 // cleanup...
 return 2;
 }

 if (fseek(f, 0, SEEK_SET) != 0) {
 printf("error: fseek failed\n");
 // cleanup...
 return 3;
 }

 // required for av_probe_input_format
 memset(pBuffer + readBytes, 0, AVPROBE_PADDING_SIZE);

 AVProbeData probeData;
 probeData.buf = pBuffer;
 probeData.buf_size = readBytes;
 probeData.filename = "";
 probeData.mime_type = NULL;

 pFormatContext->iformat = av_probe_input_format(&probeData, 1);

 // things I tried:
 //pFormatContext->flags = AVFMT_FLAG_CUSTOM_IO;
 //pFormatContext->iformat->flags |= AVFMT_NOFILE;
 //pFormatContext->iformat->read_header = NULL;

 // seems not to help (therefore commented out here):
 AVDictionary *pDictionary = NULL;
 //av_dict_set(&pDictionary, "analyzeduration", "8000000", 0);
 //av_dict_set(&pDictionary, "probesize", "8000000", 0);

 if ((ret = avformat_open_input(&pFormatContext, "", NULL, &pDictionary)) < 0) {
 char buffer[4096];
 av_strerror(ret, buffer, sizeof(buffer));
 printf("error: avformat_open_input failed: %s\n", buffer);
 // cleanup...
 return 4;
 }

 printf("retrieving stream information...\n");

 if ((ret = avformat_find_stream_info(pFormatContext, NULL)) < 0) {
 char buffer[4096];
 av_strerror(ret, buffer, sizeof(buffer));
 printf("error: avformat_find_stream_info failed: %s\n", buffer);
 // cleanup...
 return 5;
 }

 printf("nb_streams = %i\n", pFormatContext->nb_streams);

 // further code...

 // cleanup...
 return 0;
}

int main() {
 openVideo("video.mp4");
 return 0;
}




This is the output that I get:

readBytes = 32768

retrieving stream information...

[mov,mp4,m4a,3gp,3g2,mj2 @ 0xdf8d20] stream 0, offset 0x30: partial file
[mov,mp4,m4a,3gp,3g2,mj2 @ 0xdf8d20] Could not find codec parameters for stream 0 (Video: h264 (avc1 / 0x31637661), none, 640x360, 351 kb/s): unspecified pixel format

Consider increasing the value for the 'analyzeduration' and 'probesize' options

nb_streams = 2


UPDATE:

Thanks to WLGfx, here is the solution: the only thing that was missing was the seek function. Apparently, implementing it is mandatory for decoding. It is important to return the new offset, and not 0, in case of success (some solutions found on the web just return the return value of fseek, and that is wrong). Here is the minimal solution that made it work:


static int64_t seek(void *opaque, int64_t offset, int whence) {
 if (whence == SEEK_SET && fseek(f, offset, SEEK_SET) == 0) {
 return offset;
 }
 // handling AVSEEK_SIZE doesn't seem mandatory
 return -1;
}




Of course, the call to avio_alloc_context() needs to be adapted accordingly:


AVIOContext *pAVIOContext = avio_alloc_context(pBuffer, bufferSize, 0, NULL,
 &read, NULL, &seek);
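Since the end goal is decoding straight from memory, the same callback contract can be exercised against an in-memory buffer instead of the global FILE*. The sketch below (plain C, no FFmpeg calls; all names are mine, and MEM_AVSEEK_SIZE mirrors the AVSEEK_SIZE constant from libavformat/avio.h) models the two rules that matter: read returns the number of bytes delivered, and seek returns the new absolute offset rather than 0:

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>   /* SEEK_SET, SEEK_CUR, SEEK_END */
#include <string.h>

#define MEM_AVSEEK_SIZE 0x10000  /* mirrors AVSEEK_SIZE from libavformat/avio.h */

typedef struct {
    const uint8_t *data;  /* the whole media blob */
    int64_t size;
    int64_t pos;
} MemoryReader;  /* would travel through the 'opaque' pointer of avio_alloc_context */

/* Same shape as the read_packet callback: bytes copied, or EOF sentinel.
   (Newer FFmpeg expects AVERROR_EOF instead of -1 at end of stream.) */
static int mem_read(void *opaque, uint8_t *buf, int buf_size) {
    MemoryReader *r = (MemoryReader *) opaque;
    int64_t left = r->size - r->pos;
    if (left <= 0) return -1;
    if (buf_size > left) buf_size = (int) left;
    memcpy(buf, r->data + r->pos, buf_size);
    r->pos += buf_size;
    return buf_size;
}

/* Same shape as the seek callback: returns the NEW offset, not 0. */
static int64_t mem_seek(void *opaque, int64_t offset, int whence) {
    MemoryReader *r = (MemoryReader *) opaque;
    if (whence == MEM_AVSEEK_SIZE) return r->size;  /* report total size, don't move */
    int64_t target;
    switch (whence) {
        case SEEK_SET: target = offset; break;
        case SEEK_CUR: target = r->pos + offset; break;
        case SEEK_END: target = r->size + offset; break;
        default: return -1;
    }
    if (target < 0 || target > r->size) return -1;
    r->pos = target;
    return target;
}
```

With a real AVIOContext, a MemoryReader would be passed as the opaque argument of avio_alloc_context, and &mem_read / &mem_seek would replace the file-based callbacks above.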



-
Rust Win32 FFI: User-mode data execution prevention (DEP) violation
28 April 2022, by TheElix

I'm trying to pass an ID3D11Device instance from Rust to a C FFI library (FFmpeg).


I made this sample code:


pub fn create_d3d11_device(&mut self, device: &mut Box<ID3D11Device>, context: &mut Box<ID3D11DeviceContext>) {
    let av_device: Box<AVBufferRef> = self.alloc(HwDeviceType::D3d11va);
    unsafe {
        let device_context = Box::from_raw(av_device.data as *mut AVHWDeviceContext);
        let mut d3d11_device_context = Box::from_raw(device_context.hwctx as *mut AVD3D11VADeviceContext);
        d3d11_device_context.device = device.as_mut() as *mut _;
        d3d11_device_context.device_context = context.as_mut() as *mut _;
        let avp = Box::into_raw(av_device);
        av_hwdevice_ctx_init(avp);
        self.av_hwdevice = Some(Box::from_raw(avp));
    }
}


On the Rust side the device does work, but on the C side, when FFmpeg calls ID3D11DeviceContext_QueryInterface, the app crashes with the following error: Exception 0xc0000005 encountered at address 0x7ff9fb99ad38: User-mode data execution prevention (DEP) violation at location 0x7ff9fb99ad38


The address is actually the pointer for the lpVtbl of QueryInterface, as seen in the debugger.


The disassembly of the address also looks correct (this is from another debugging session):


(lldb) disassemble --start-address 0x00007ffffdf3ad38
 0x7ffffdf3ad38: addb %ah, 0x7ffffd(%rdi,%riz,8)
 0x7ffffdf3ad3f: addb %al, (%rax)
 0x7ffffdf3ad41: movabsl -0x591fffff80000219, %eax
 0x7ffffdf3ad4a: outl %eax, $0xfd



Do you have any pointers on how to debug this further?


EDIT: I made a minimal reproduction sample. Interestingly, this does not cause a DEP violation, but simply a segfault.


On the C side:


int test_ffi(ID3D11Device *device){
 ID3D11DeviceContext *context;
 device->lpVtbl->GetImmediateContext(device, &context);
 if (!context) return 1;
 return 0;
}



On the Rust side:


unsafe fn main_rust(){
 let mut device = None;
 let mut device_context = None;
 let _ = match windows::Win32::Graphics::Direct3D11::D3D11CreateDevice(None, D3D_DRIVER_TYPE_HARDWARE, OtherHinstance::default(), D3D11_CREATE_DEVICE_DEBUG, &[], D3D11_SDK_VERSION, &mut device, std::ptr::null_mut(), &mut device_context) {
 Ok(e) => e,
 Err(e) => panic!("Creation Failed: {}", e)
 };
 let mut device = match device {
 Some(e) => e,
 None => panic!("Creation Failed2")
 };
 let mut f2 : ID3D11Device = transmute_copy(&device); //Transmuting the WinAPI into a bindgen ID3D11Device
 test_ffi(&mut f2);
}
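One plausible failure mode (a sketch of the mechanism, not a diagnosis of this exact crash): COM interfaces like ID3D11Device are reached through a vtable pointer stored in the object's first field, so if transmute_copy yields a value whose first word is not the original lpVtbl, the call device->lpVtbl->... loads a bogus function pointer and jumps into data, which is exactly what a DEP violation at the "function" address looks like. The plain-C shape of the pattern (all names invented for illustration):

```c
#include <assert.h>

/* Minimal COM-style object: the first word is a pointer to a table of
   function pointers, and every method receives the object itself first. */
typedef struct Widget Widget;

typedef struct {
    int (*GetValue)(Widget *self);
} WidgetVtbl;

struct Widget {
    const WidgetVtbl *lpVtbl;  /* must be the FIRST field, as in ID3D11Device */
    int value;
};

static int widget_get_value(Widget *self) { return self->value; }
static const WidgetVtbl WIDGET_VTBL = { widget_get_value };

static int call_get_value(Widget *w) {
    /* Same shape as device->lpVtbl->GetImmediateContext(device, ...):
       if w's first field is not a valid vtable pointer, this call jumps
       into garbage (segfault or DEP violation). */
    return w->lpVtbl->GetValue(w);
}
```

If the bindgen ID3D11Device is the object struct itself while the windows crate type is a thin pointer to the object, then passing &mut f2 hands C a struct whose first field holds the object pointer rather than the vtable pointer, which would match this symptom; that is an assumption to verify against the generated bindings.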



The bindgen build.rs:


extern crate bindgen;

use std::env;
use std::path::PathBuf;

fn main() {
 // Tell cargo to tell rustc to link the system bzip2
 // shared library.
 println!("cargo:rustc-link-lib=ffi_demoLIB");
 println!("cargo:rustc-link-lib=d3d11");

 // Tell cargo to invalidate the built crate whenever the wrapper changes
 println!("cargo:rerun-if-changed=library.h");

 // The bindgen::Builder is the main entry point
 // to bindgen, and lets you build up options for
 // the resulting bindings.
 let bindings = bindgen::Builder::default()
 // The input header we would like to generate
 // bindings for.
 .header("library.h")
 // Tell cargo to invalidate the built crate whenever any of the
 // included header files changed.
 .parse_callbacks(Box::new(bindgen::CargoCallbacks))
 .blacklist_type("_IMAGE_TLS_DIRECTORY64")
 .blacklist_type("IMAGE_TLS_DIRECTORY64")
 .blacklist_type("PIMAGE_TLS_DIRECTORY64")
 .blacklist_type("IMAGE_TLS_DIRECTORY")
 .blacklist_type("PIMAGE_TLS_DIRECTORY")
 // Finish the builder and generate the bindings.
 .generate()
 // Unwrap the Result and panic on failure.
 .expect("Unable to generate bindings");

 // Write the bindings to the $OUT_DIR/bindings.rs file.
 let out_path = PathBuf::from(env::var("OUT_DIR").unwrap());
 bindings
 .write_to_file(out_path.join("bindings.rs"))
 .expect("Couldn't write bindings!");
}



The complete repo can be found here: https://github.com/TheElixZammuto/demo-ffi


-
Frames taken with ELP camera have unknown pixel format at FHD?
11 November 2024, by Marcel Kopera

I'm trying to take one frame every x seconds from my USB camera. The name of the camera is ELP-USBFHD06H-SFV(5-50).
The code is not 100% done yet, but I'm using it this way right now (the shot() function is called from main.py in a loop):


import cv2
import subprocess

from time import sleep
from collections import namedtuple

from errors import *

class Camera:
 def __init__(self, cam_index, res_width, res_height, pic_format, day_time_exposure_ms, night_time_exposure_ms):
 Resolution = namedtuple("resolution", ["width", "height"])
 self.manual_mode(True)

 self.cam_index = cam_index
 self.camera_resolution = Resolution(res_width, res_height)
 self.picture_format = pic_format
 self.day_time_exposure_ms = day_time_exposure_ms
 self.night_time_exposure_ms = night_time_exposure_ms

 self.started: bool = False
 self.night_mode = False

 self.cap = cv2.VideoCapture(self.cam_index, cv2.CAP_V4L2)
 self.cap.set(cv2.CAP_PROP_FRAME_WIDTH, self.camera_resolution.width)
 self.cap.set(cv2.CAP_PROP_FRAME_HEIGHT, self.camera_resolution.height)
 self.cap.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter_fourcc(*self.picture_format))

 

 def start(self):
 sleep(1)
 if not self.cap.isOpened():
 return CameraCupError()

 self.set_exposure_time(self.day_time_exposure_ms)
 self.set_brightness(0)
 sleep(0.1)
 
 self.started = True



 def shot(self, picture_name, is_night):
 if not self.started:
 return InitializationError()

 self.configure_mode(is_night)

 # Clear buffer
 for _ in range(5):
 ret, _ = self.cap.read()

 ret, frame = self.cap.read()

 sleep(0.1)

 if ret:
 print(picture_name)
 cv2.imwrite(picture_name, frame)
 return True

 else:
 print("No photo")
 return False


 
 def release(self):
 self.set_exposure_time(156)
 self.set_brightness(0)
 self.manual_mode(False)
 self.cap.release()



 def manual_mode(self, switch: bool):
 if switch:
 subprocess.run(["v4l2-ctl", "--set-ctrl=auto_exposure=1"])
 else:
 subprocess.run(["v4l2-ctl", "--set-ctrl=auto_exposure=3"])
 sleep(1)

 
 
 def configure_mode(self, is_night):
 if is_night == self.night_mode:
 return

 if is_night:
 self.night_mode = is_night
 self.set_exposure_time(self.night_time_exposure_ms)
 self.set_brightness(64)
 else:
 self.night_mode = is_night
 self.set_exposure_time(self.day_time_exposure_ms)
 self.set_brightness(0)
 sleep(0.1)



 def set_exposure_time(self, ms: int):
 ms = int(ms)
 default_val = 156

 if ms < 1 or ms > 5000:
 ms = default_val

 self.cap.set(cv2.CAP_PROP_EXPOSURE, ms)



 def set_brightness(self, value: int):
 value = int(value)
 default_val = 0

 if value < -64 or value > 64:
 value = default_val

 self.cap.set(cv2.CAP_PROP_BRIGHTNESS, value)



Here are the settings for the camera (YAML file):


camera:
 camera_index: 0
 res_width: 1920
 res_height: 1080
 picture_format: "MJPG"
 day_time_exposure_ms: 5
 night_time_exposure_ms: 5000
 photos_format: "jpg"




I do some configuration, like setting manual mode for the camera, changing exposure/brightness, and saving the frame.
Also, the camera is probably keeping frames in a buffer (it is not saving the latest frame in real time; it lags behind), so I have to clear the buffer every time, like this:


# Clear buffer from old frames
 for _ in range(5):
 ret, _ = self.cap.read()
 
 # Get a new frame
 ret, frame = self.cap.read()



This is something I really don't like, but I couldn't find a better way (tl;dr: setting the buffer to 1 frame doesn't work on my camera).


Frames saved with this method look good at 1920x1080 resolution. BUT when I try to run an ffmpeg command to make a timelapse from the saved jpg files like this:

ffmpeg -framerate 20 -pattern_type glob -i "*.jpg" -c:v libx264 output.mp4



I got an error like this one


[image2 @ 0x555609c45240] Could not open file : 08:59:20.jpg
[image2 @ 0x555609c45240] Could not find codec parameters for stream 0 (Video: mjpeg, none(bt470bg/unknown/unknown)): unspecified size
Consider increasing the value for the 'analyzeduration' (0) and 'probesize' (5000000) options
Input #0, image2, from '*.jpg':
 Duration: 00:00:00.05, start: 0.000000, bitrate: N/A
 Stream #0:0: Video: mjpeg, none(bt470bg/unknown/unknown), 20 fps, 20 tbr, 20 tbn
Output #0, mp4, to 'output.mp4':
Output file #0 does not contain any stream



Also, when I try to copy the files from Linux to Windows, I get a weird copy-failure error with an option to skip the picture. But even when I press the skip button, the picture is copied and can be opened. I'm not sure what is wrong with the format, but the camera supports MJPG at 1920x1080.
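Given the "partial file" flavor of the ffmpeg errors, one thing worth ruling out is truncated frames on disk. A complete JPEG starts with the SOI marker FF D8 and ends with the EOI marker FF D9, so a tiny checker (my own sketch, not part of the original code) can flag cut-off files:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Returns 1 if the buffer looks like a complete JPEG: starts with the
   SOI marker (FF D8) and ends with the EOI marker (FF D9). */
static int jpeg_looks_complete(const uint8_t *buf, size_t len) {
    if (len < 4) return 0;
    if (buf[0] != 0xFF || buf[1] != 0xD8) return 0;       /* SOI */
    return buf[len - 2] == 0xFF && buf[len - 1] == 0xD9;  /* EOI */
}
```

Running this over the saved captures could show whether opencv ever writes an incomplete MJPG frame, which would explain both the ffmpeg failure and the odd copy errors.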


>>> v4l2-ctl --all

Driver Info:
 Driver name : uvcvideo
 Card type : H264 USB Camera: USB Camera
 Bus info : usb-xhci-hcd.1-1
 Driver version : 6.6.51
 Capabilities : 0x84a00001
 Video Capture
 Metadata Capture
 Streaming
 Extended Pix Format
 Device Capabilities
 Device Caps : 0x04200001
 Video Capture
 Streaming
 Extended Pix Format
Media Driver Info:
 Driver name : uvcvideo
 Model : H264 USB Camera: USB Camera
 Serial : 2020032801
 Bus info : usb-xhci-hcd.1-1
 Media version : 6.6.51
 Hardware revision: 0x00000100 (256)
 Driver version : 6.6.51
Interface Info:
 ID : 0x03000002
 Type : V4L Video
Entity Info:
 ID : 0x00000001 (1)
 Name : H264 USB Camera: USB Camera
 Function : V4L2 I/O
 Flags : default
 Pad 0x0100000d : 0: Sink
 Link 0x0200001a: from remote pad 0x1000010 of entity 'Extension 4' (Video Pixel Formatter): Data, Enabled, Immutable
Priority: 2
Video input : 0 (Camera 1: ok)
Format Video Capture:
 Width/Height : 1920/1080
 Pixel Format : 'MJPG' (Motion-JPEG)
 Field : None
 Bytes per Line : 0
 Size Image : 4147789
 Colorspace : sRGB
 Transfer Function : Default (maps to sRGB)
 YCbCr/HSV Encoding: Default (maps to ITU-R 601)
 Quantization : Default (maps to Full Range)
 Flags :
Crop Capability Video Capture:
 Bounds : Left 0, Top 0, Width 1920, Height 1080
 Default : Left 0, Top 0, Width 1920, Height 1080
 Pixel Aspect: 1/1
Selection Video Capture: crop_default, Left 0, Top 0, Width 1920, Height 1080, Flags:
Selection Video Capture: crop_bounds, Left 0, Top 0, Width 1920, Height 1080, Flags:
Streaming Parameters Video Capture:
 Capabilities : timeperframe
 Frames per second: 15.000 (15/1)
 Read buffers : 0

User Controls

 brightness 0x00980900 (int) : min=-64 max=64 step=1 default=0 value=64
 contrast 0x00980901 (int) : min=0 max=64 step=1 default=32 value=32
 saturation 0x00980902 (int) : min=0 max=128 step=1 default=56 value=56
 hue 0x00980903 (int) : min=-40 max=40 step=1 default=0 value=0
 white_balance_automatic 0x0098090c (bool) : default=1 value=1
 gamma 0x00980910 (int) : min=72 max=500 step=1 default=100 value=100
 gain 0x00980913 (int) : min=0 max=100 step=1 default=0 value=0
 power_line_frequency 0x00980918 (menu) : min=0 max=2 default=1 value=1 (50 Hz)
 0: Disabled
 1: 50 Hz
 2: 60 Hz
 white_balance_temperature 0x0098091a (int) : min=2800 max=6500 step=1 default=4600 value=4600 flags=inactive
 sharpness 0x0098091b (int) : min=0 max=6 step=1 default=3 value=3
 backlight_compensation 0x0098091c (int) : min=0 max=2 step=1 default=1 value=1

Camera Controls

 auto_exposure 0x009a0901 (menu) : min=0 max=3 default=3 value=1 (Manual Mode)
 1: Manual Mode
 3: Aperture Priority Mode
 exposure_time_absolute 0x009a0902 (int) : min=1 max=5000 step=1 default=156 value=5000
 exposure_dynamic_framerate 0x009a0903 (bool) : default=0 value=0



I also tried to save the picture using ffmpeg, in case something is not right with opencv, like this:

ffmpeg -f v4l2 -framerate 30 -video_size 1920x1080 -i /dev/video0 -c:v libx264 -preset fast -crf 23 -t 00:01:00 output.mp4




It is saving the picture but also changing its format:


[video4linux2,v4l2 @ 0x555659ed92b0] The V4L2 driver changed the video from 1920x1080 to 800x600
[video4linux2,v4l2 @ 0x555659ed92b0] The driver changed the time per frame from 1/30 to 1/15



But the format looks right when I set it back to FHD using v4l2:



>>> v4l2-ctl --device=/dev/video0 --set-fmt-video=width=1920,height=1080,pixelformat=MJPG
>>> v4l2-ctl --get-fmt-video

Format Video Capture:
 Width/Height : 1920/1080
 Pixel Format : 'MJPG' (Motion-JPEG)
 Field : None
 Bytes per Line : 0
 Size Image : 4147789
 Colorspace : sRGB
 Transfer Function : Default (maps to sRGB)
 YCbCr/HSV Encoding: Default (maps to ITU-R 601)
 Quantization : Default (maps to Full Range)
 Flags :



I'm not sure what could be wrong with the format/camera and I don't think I have enough information to figure it out.


I tried to use ffmpeg instead of opencv, and also changed a few settings in opencv's cap config.