
On other sites (9243)
-
Python : Extracting device and lens information from video metadata
14 May 2023, by cat_got_my_tongue
I am interested in extracting device and lens information from videos, specifically the make and model of the device and the focal length. I was able to do this successfully for still images using the exifread module, which extracts a whole bunch of very useful information:

image type : MPO
Image ImageDescription: Shot with DxO ONE
Image Make: DxO
Image Model: DxO ONE
Image Orientation: Horizontal (normal)
Image XResolution: 300
Image YResolution: 300
Image ResolutionUnit: Pixels/Inch
Image Software: V3.0.0 (2b448a1aee) APP:1.0
Image DateTime: 2022:04:05 14:53:45
Image YCbCrCoefficients: [299/1000, 587/1000, 57/500]
Image YCbCrPositioning: Centered
Image ExifOffset: 158
Thumbnail Compression: JPEG (old-style)
Thumbnail XResolution: 300
Thumbnail YResolution: 300
Thumbnail ResolutionUnit: Pixels/Inch
Thumbnail JPEGInterchangeFormat: 7156
Thumbnail JPEGInterchangeFormatLength: 24886
EXIF ExposureTime: 1/3
EXIF FNumber: 8
EXIF ExposureProgram: Aperture Priority
EXIF ISOSpeedRatings: 100
EXIF SensitivityType: ISO Speed
EXIF ISOSpeed: 100
EXIF ExifVersion: 0221
EXIF DateTimeOriginal: 2022:04:05 14:53:45
EXIF DateTimeDigitized: 2022:04:05 14:53:45
EXIF ComponentsConfiguration: CrCbY
EXIF CompressedBitsPerPixel: 3249571/608175
EXIF ExposureBiasValue: 0
EXIF MaxApertureValue: 212/125
EXIF SubjectDistance: 39/125
EXIF MeteringMode: MultiSpot
EXIF LightSource: Unknown
EXIF Flash: Flash did not fire
EXIF FocalLength: 1187/100
EXIF SubjectArea: [2703, 1802, 675, 450]
EXIF MakerNote: [68, 88, 79, 32, 79, 78, 69, 0, 12, 0, 0, 0, 21, 0, 3, 0, 5, 0, 2, 0, ... ]
EXIF SubSecTime: 046
EXIF SubSecTimeOriginal: 046
EXIF SubSecTimeDigitized: 046
EXIF FlashPixVersion: 0100
EXIF ColorSpace: sRGB
EXIF ExifImageWidth: 5406
EXIF ExifImageLength: 3604
Interoperability InteroperabilityIndex: R98
Interoperability InteroperabilityVersion: [48, 49, 48, 48]
EXIF InteroperabilityOffset: 596
EXIF FileSource: Digital Camera
EXIF ExposureMode: Auto Exposure
EXIF WhiteBalance: Auto
EXIF DigitalZoomRatio: 1
EXIF FocalLengthIn35mmFilm: 32
EXIF SceneCaptureType: Standard
EXIF ImageUniqueID: C01A1709306530020220405185345046
EXIF BodySerialNumber: C01A1709306530
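Filtering that dump down to just the device fields is then a dictionary lookup. A minimal sketch (device_info is a hypothetical helper name; plain strings stand in for the IfdTag objects exifread actually returns):

```python
def device_info(tags):
    """Pick make, model and focal length out of an exifread-style tag mapping."""
    wanted = {
        "Image Make": "make",
        "Image Model": "model",
        "EXIF FocalLength": "focal_length",
        "EXIF FocalLengthIn35mmFilm": "focal_length_35mm",
    }
    # exifread values are IfdTag objects; str() gives their printable form
    return {label: str(tags[key]) for key, label in wanted.items() if key in tags}

# stand-in for the dump above
tags = {
    "Image Make": "DxO",
    "Image Model": "DxO ONE",
    "EXIF FocalLength": "1187/100",
}
print(device_info(tags))
# {'make': 'DxO', 'model': 'DxO ONE', 'focal_length': '1187/100'}
```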



Unfortunately, I have been unable to extract this kind of info from videos so far.


This is what I have tried so far, with the ffmpeg module (the ffmpeg-python package):

import ffmpeg
from pprint import pprint

test_video = "my_video.mp4"
pprint(ffmpeg.probe(test_video)["streams"])



And the output I get contains a lot of info but nothing related to the device or lens, which is what I am looking for:


[{'avg_frame_rate': '30/1',
 'bit_rate': '1736871',
 'bits_per_raw_sample': '8',
 'chroma_location': 'left',
 'codec_long_name': 'H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10',
 'codec_name': 'h264',
 'codec_tag': '0x31637661',
 'codec_tag_string': 'avc1',
 'codec_time_base': '1/60',
 'codec_type': 'video',
 'coded_height': 1088,
 'coded_width': 1920,
 'display_aspect_ratio': '16:9',
 'disposition': {'attached_pic': 0,
 'clean_effects': 0,
 'comment': 0,
 'default': 1,
 'dub': 0,
 'forced': 0,
 'hearing_impaired': 0,
 'karaoke': 0,
 'lyrics': 0,
 'original': 0,
 'timed_thumbnails': 0,
 'visual_impaired': 0},
 'duration': '20.800000',
 'duration_ts': 624000,
 'has_b_frames': 0,
 'height': 1080,
 'index': 0,
 'is_avc': 'true',
 'level': 40,
 'nal_length_size': '4',
 'nb_frames': '624',
 'pix_fmt': 'yuv420p',
 'profile': 'Constrained Baseline',
 'r_frame_rate': '30/1',
 'refs': 1,
 'sample_aspect_ratio': '1:1',
 'start_pts': 0,
 'start_time': '0.000000',
 'tags': {'creation_time': '2021-05-08T13:23:20.000000Z',
 'encoder': 'AVC Coding',
 'handler_name': 'VideoHandler',
 'language': 'und'},
 'time_base': '1/30000',
 'width': 1920},
 {'avg_frame_rate': '0/0',
 'bit_rate': '79858',
 'bits_per_sample': 0,
 'channel_layout': 'stereo',
 'channels': 2,
 'codec_long_name': 'AAC (Advanced Audio Coding)',
 'codec_name': 'aac',
 'codec_tag': '0x6134706d',
 'codec_tag_string': 'mp4a',
 'codec_time_base': '1/48000',
 'codec_type': 'audio',
 'disposition': {'attached_pic': 0,
 'clean_effects': 0,
 'comment': 0,
 'default': 1,
 'dub': 0,
 'forced': 0,
 'hearing_impaired': 0,
 'karaoke': 0,
 'lyrics': 0,
 'original': 0,
 'timed_thumbnails': 0,
 'visual_impaired': 0},
 'duration': '20.864000',
 'duration_ts': 1001472,
 'index': 1,
 'max_bit_rate': '128000',
 'nb_frames': '978',
 'profile': 'LC',
 'r_frame_rate': '0/0',
 'sample_fmt': 'fltp',
 'sample_rate': '48000',
 'start_pts': 0,
 'start_time': '0.000000',
 'tags': {'creation_time': '2021-05-08T13:23:20.000000Z',
 'handler_name': 'SoundHandler',
 'language': 'und'},
 'time_base': '1/48000'}]



Are these pieces of info available for videos? Should I be using a different package?


Thanks.


Edit:


pprint(ffmpeg.probe(test_video)["format"])
gives

{'bit_rate': '1815244',
 'duration': '20.864000',
 'filename': 'my_video.mp4',
 'format_long_name': 'QuickTime / MOV',
 'format_name': 'mov,mp4,m4a,3gp,3g2,mj2',
 'nb_programs': 0,
 'nb_streams': 2,
 'probe_score': 100,
 'size': '4734158',
 'start_time': '0.000000',
 'tags': {'artist': 'Microsoft Game DVR',
 'compatible_brands': 'mp41isom',
 'creation_time': '2021-05-08T12:12:33.000000Z',
 'major_brand': 'mp42',
 'minor_version': '0',
 'title': 'Snipping Tool'}}
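Some containers do carry device metadata; phones typically write keys such as com.apple.quicktime.make and com.apple.quicktime.model into the MOV/MP4 tags dicts shown above (this particular file, captured by Microsoft Game DVR, has none). As a sketch, not a guaranteed approach (find_device_tags is a hypothetical helper and the key hints are assumptions), the probe output can be scanned for likely device-related tags:

```python
def find_device_tags(probe):
    """Collect tag entries from ffprobe output whose keys hint at device/lens info."""
    hints = ("make", "model", "lens", "device", "artist", "encoder")
    found = {}
    # tags can live on the container ("format") or on individual streams
    sections = [probe.get("format", {})] + list(probe.get("streams", []))
    for section in sections:
        for key, value in section.get("tags", {}).items():
            if any(h in key.lower() for h in hints):
                found[key] = value
    return found

# stand-in for the probe output above
probe = {
    "format": {"tags": {"artist": "Microsoft Game DVR", "title": "Snipping Tool"}},
    "streams": [{"tags": {"encoder": "AVC Coding"}}],
}
print(find_device_tags(probe))
# {'artist': 'Microsoft Game DVR', 'encoder': 'AVC Coding'}
```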



-
Converting python program with custom audio ffmpeg command to rust
4 October 2022, by rust_convert
To work on learning Rust (in a Tauri project), I am converting a Python 2 program that uses ffmpeg to create a custom video format from a GUI. The video portion converts successfully, but I am unable to get the audio to work. From the debugging I have done over the past few days, it looks like I am not reading the audio data correctly in Rust: the approach that works for the video data does not work for the audio. I tried reading the audio data as a string and then converting it to bytes, but the byte array came out empty, so I have been looking into how the data is piped and cannot sort out what's wrong.


The Python code snippet for video and audio conversion:


output = open(self.outputFile, 'wb')
devnull = open(os.devnull, 'wb')

vidcommand = [ FFMPEG_BIN,
    '-i', self.inputFile,
    '-f', 'image2pipe',
    '-r', '%d' % (self.outputFrameRate),
    '-vf', scaleCommand,
    '-vcodec', 'rawvideo',
    '-pix_fmt', 'bgr565be',
    '-f', 'rawvideo', '-']

vidPipe = ''
if os.name == 'nt':
    startupinfo = sp.STARTUPINFO()
    startupinfo.dwFlags |= sp.STARTF_USESHOWWINDOW
    vidPipe = sp.Popen(vidcommand, stdin=sp.PIPE, stdout=sp.PIPE, stderr=devnull,
                       bufsize=self.inputVidFrameBytes*10, startupinfo=startupinfo)
else:
    vidPipe = sp.Popen(vidcommand, stdin=sp.PIPE, stdout=sp.PIPE, stderr=devnull,
                       bufsize=self.inputVidFrameBytes*10)

vidFrame = vidPipe.stdout.read(self.inputVidFrameBytes)

audioCommand = [ FFMPEG_BIN,
    '-i', self.inputFile,
    '-f', 's16le',
    '-acodec', 'pcm_s16le',
    '-ar', '%d' % (self.outputAudioSampleRate),
    '-ac', '1',
    '-']

audioPipe = ''
if (self.audioEnable.get() == 1):
    if os.name == 'nt':
        startupinfo = sp.STARTUPINFO()
        startupinfo.dwFlags |= sp.STARTF_USESHOWWINDOW
        audioPipe = sp.Popen(audioCommand, stdin=sp.PIPE, stdout=sp.PIPE, stderr=devnull,
                             bufsize=self.audioFrameBytes*10, startupinfo=startupinfo)
    else:
        audioPipe = sp.Popen(audioCommand, stdin=sp.PIPE, stdout=sp.PIPE, stderr=devnull,
                             bufsize=self.audioFrameBytes*10)

    audioFrame = audioPipe.stdout.read(self.audioFrameBytes)

currentFrame = 0

while len(vidFrame) == self.inputVidFrameBytes:
    currentFrame += 1
    if (currentFrame % 30 == 0):
        self.progressBarVar.set(100.0*(currentFrame*1.0)/self.totalFrames)
    if (self.videoBitDepth.get() == 16):
        output.write(vidFrame)
    else:
        b16VidFrame = bytearray(vidFrame)
        b8VidFrame = []
        for p in range(self.outputVidFrameBytes):
            b8VidFrame.append(((b16VidFrame[(p*2)+0]>>0)&0xE0)|((b16VidFrame[(p*2)+0]<<2)&0x1C)|((b16VidFrame[(p*2)+1]>>3)&0x03))
        output.write(bytearray(b8VidFrame))

    vidFrame = vidPipe.stdout.read(self.inputVidFrameBytes)  # Read where vidframe is to match up with audio frame and output?
    if (self.audioEnable.get() == 1):
        if len(audioFrame) == self.audioFrameBytes:
            audioData = bytearray(audioFrame)

            for j in range(int(round(self.audioFrameBytes/2))):
                sample = ((audioData[(j*2)+1]<<8) | audioData[j*2]) + 0x8000
                sample = (sample>>(16-self.outputAudioSampleBitDepth)) & (0x0000FFFF>>(16-self.outputAudioSampleBitDepth))

                audioData[j*2] = sample & 0xFF
                audioData[(j*2)+1] = sample>>8

            output.write(audioData)
            audioFrame = audioPipe.stdout.read(self.audioFrameBytes)

        else:
            emptySamples = []
            for samples in range(int(round(self.audioFrameBytes/2))):
                emptySamples.append(0x00)
                emptySamples.append(0x00)
            output.write(bytearray(emptySamples))

self.progressBarVar.set(100.0)

vidPipe.terminate()
vidPipe.stdout.close()
vidPipe.wait()

if (self.audioEnable.get() == 1):
    audioPipe.terminate()
    audioPipe.stdout.close()
    audioPipe.wait()

output.close()
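The audio branch of this loop requantizes each signed 16-bit little-endian sample down to outputAudioSampleBitDepth bits. As a sketch of that arithmetic in isolation (requantize is a hypothetical helper, and the 10-bit output depth is an assumption):

```python
def requantize(lo, hi, bit_depth=10):
    """Convert one s16le sample (two bytes, little-endian) to an unsigned value of
    bit_depth bits, returned as a little-endian byte pair, mirroring the loop above."""
    # shift the signed range -32768..32767 up to unsigned 0..65535
    sample = (((hi << 8) | lo) + 0x8000) & 0xFFFF
    # keep only the top bit_depth bits
    sample = (sample >> (16 - bit_depth)) & (0xFFFF >> (16 - bit_depth))
    return sample & 0xFF, (sample >> 8) & 0xFF

# silence (0) lands at the midpoint of the 10-bit range
print(requantize(0x00, 0x00))   # (0, 2): value 512
print(requantize(0xFF, 0x7F))   # (255, 3): full-scale positive maps to 1023
```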



The Rust snippet that should accomplish the same goals:


let output_file = OpenOptions::new()
 .create(true)
 .truncate(true)
 .write(true)
 .open(&output_path)
 .unwrap();
let mut writer = BufWriter::with_capacity(
 options.video_frame_bytes.max(options.audio_frame_bytes),
 output_file,
);
let ffmpeg_path = sidecar_path("ffmpeg");
#[cfg(debug_assertions)]
let timer = Instant::now();

let mut video_cmd = Command::new(&ffmpeg_path);
#[rustfmt::skip]
video_cmd.args([
 "-i", options.path,
 "-f", "image2pipe",
 "-r", options.frame_rate,
 "-vf", options.scale,
 "-vcodec", "rawvideo",
 "-pix_fmt", "bgr565be",
 "-f", "rawvideo",
 "-",
])
.stdin(Stdio::null())
.stdout(Stdio::piped())
.stderr(Stdio::null());

// windows creation flag CREATE_NO_WINDOW: stops the process from creating a CMD window
// https://docs.microsoft.com/en-us/windows/win32/procthread/process-creation-flags
#[cfg(windows)]
video_cmd.creation_flags(0x08000000);

let mut video_child = video_cmd.spawn().unwrap();
let mut video_stdout = video_child.stdout.take().unwrap();
let mut video_frame = vec![0; options.video_frame_bytes];

let mut audio_cmd = Command::new(&ffmpeg_path);
#[rustfmt::skip]
audio_cmd.args([
 "-i", options.path,
 "-f", "s16le",
 "-acodec", "pcm_s16le",
 "-ar", options.sample_rate,
 "-ac", "1",
 "-",
])
.stdin(Stdio::null())
.stdout(Stdio::piped())
.stderr(Stdio::null());

#[cfg(windows)]
audio_cmd.creation_flags(0x08000000);

let mut audio_child = audio_cmd.spawn().unwrap();
let mut audio_stdout = audio_child.stdout.take().unwrap();
let mut audio_frame = vec![0; options.audio_frame_bytes];

while video_stdout.read_exact(&mut video_frame).is_ok() {
    writer.write_all(&video_frame).unwrap();

    if audio_stdout.read_to_end(&mut audio_frame).is_ok() {
        if audio_frame.len() == options.audio_frame_bytes {
            for i in 0..options.audio_frame_bytes / 2 {
                let temp_sample = ((u32::from(audio_frame[(i * 2) + 1]) << 8)
                    | u32::from(audio_frame[i * 2]))
                    + 0x8000;
                let sample = (temp_sample >> (16 - 10)) & (0x0000FFFF >> (16 - 10));

                audio_frame[i * 2] = (sample & 0xFF) as u8;
                audio_frame[(i * 2) + 1] = (sample >> 8) as u8;
            }
        } else {
            audio_frame.fill(0x00);
        }
    }
    writer.write_all(&audio_frame).unwrap();
}


video_child.wait().unwrap();
audio_child.wait().unwrap();

#[cfg(debug_assertions)]
{
 let elapsed = timer.elapsed();
 dbg!(elapsed);
}

writer.flush().unwrap();



I have looked at the hex data of the files using HxD; regardless of how I alter the Rust program, I am unable to get data different from what is previewed in the attached image, so it is likely that the audio conversion is not working at all and the file contains only the video data. There is also a screenshot of the hex data from the working Python program, which converts the video and audio correctly.
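One hypothesis worth checking, sketched here rather than asserted as the fix: Rust's read_to_end reads until EOF and appends to the Vec, so it is not equivalent to Python's stdout.read(n); read_exact, already used for the video frames, is the closer match. The difference, demonstrated on an in-memory reader:

```rust
use std::io::{Cursor, Read};

fn main() {
    let mut src = Cursor::new(vec![1u8, 2, 3, 4, 5, 6]);

    // read_exact overwrites the fixed-size buffer, like Python's pipe.stdout.read(n)
    let mut frame = vec![0u8; 4];
    src.read_exact(&mut frame).unwrap();
    assert_eq!(frame, vec![1, 2, 3, 4]);

    // read_to_end drains the rest of the stream and APPENDS to the Vec,
    // so a pre-sized buffer keeps its zeros and grows past the expected length
    let mut buf = vec![0u8; 4];
    src.read_to_end(&mut buf).unwrap();
    assert_eq!(buf, vec![0, 0, 0, 0, 5, 6]);

    println!("ok");
}
```

If that is the cause, reading the audio with read_exact(&mut audio_frame) and zero-filling the frame on error would mirror the Python loop's behaviour.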


HxD Python program hex output :




HxD Rust program hex output :




-
How do terminal pipes in Python differ from those in Rust ?
5 October 2022, by rust_convert
To work on learning Rust (in a Tauri project), I am converting a Python 2 program that uses ffmpeg to create a custom video format from a GUI. The video portion converts successfully, but I am unable to get the audio to work. From the debugging I have done over the past few days, it looks like I am not reading the audio data from the terminal pipe correctly in Rust: the approach that works for the video data does not work for the audio. I tried reading the audio data as a string and then converting it to bytes, but the byte array came out empty. I have been researching how data is piped in the Rust documentation and the Python documentation, and I am unsure how the Rust pipe could be empty or incorrect when it works for the video.


From a Python article and a Rust Stack Overflow exchange, it looks like the Python stdout pipe is equivalent to the Rust stdin pipe?


(...)