
Other articles (51)
-
Accepted formats
28 January 2010
The following commands report which formats and codecs the local ffmpeg installation supports (see the scripted check below):
ffmpeg -codecs
ffmpeg -formats
Video formats accepted as input
This list is not exhaustive; it highlights the main formats in use:
h264: H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10
m4v: raw MPEG-4 video format
flv: Flash Video (FLV) / Sorenson Spark / Sorenson H.263
Theora
wmv:
Possible output video formats
To begin with, we (...)
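As a side note, the check above can be scripted; a small sketch, assuming Python and that ffmpeg is on the PATH, with only a rough parse of the capability table:

import subprocess

def codec_supported(name: str) -> bool:
    # `ffmpeg -codecs` prints one codec per line, e.g. " DEV.LS h264 ...";
    # testing for the name as a whole word in the table is a simplification.
    out = subprocess.run(["ffmpeg", "-codecs"],
                         capture_output=True, text=True).stdout
    return any(name in line.split() for line in out.splitlines())

print(codec_supported("h264"), codec_supported("theora"), codec_supported("flv"))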
-
The SPIPmotion queue
28 November 2010
A queue stored in the database
When it is installed, SPIPmotion creates a new table in the database, named spip_spipmotion_attentes.
This new table is made up of the following fields:
id_spipmotion_attente, the unique numeric identifier of the task to process;
id_document, the numeric identifier of the original document to encode;
id_objet, the unique identifier of the object to which the encoded document should automatically be attached;
objet, the type of object to which (...)
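To make the row layout concrete, here is the excerpted schema as a sketch, a Python dataclass for illustration only; the field types are assumptions, and the excerpt cuts off before the remaining fields:

from dataclasses import dataclass

@dataclass
class SpipmotionAttente:
    # Fields of spip_spipmotion_attentes as listed above; the types are
    # guesses, and the original table has more fields than this truncated
    # excerpt shows.
    id_spipmotion_attente: int  # unique numeric id of the task to process
    id_document: int            # id of the original document to encode
    id_objet: int               # id of the object the encoded document attaches to
    objet: str                  # type of that object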
-
Using and configuring the script
19 January 2011
Information specific to the Debian distribution
If you use this distribution, you will have to enable the "debian-multimedia" repositories, as explained here:
Since version 0.3.1 of the script, the repository can be enabled automatically in answer to a prompt.
Getting the script
The installation script can be obtained in two different ways.
Via svn, using this command to fetch the current source code:
svn co (...)
On other sites (5644)
-
Capture CMOS video with FPGA, encode and send over Ethernet
23 December 2015, by ya_urock
I am planning an open source university project for my students, based on a Xilinx Zynq FPGA, that will capture CMOS video, encode it into a transport stream and send it over Ethernet to a remote PC. Basically I want to design yet another IP camera. I have strong FPGA experience, but lack knowledge about encoding and transferring video data. Here is my plan:
-
Connect the CMOS camera to the FPGA, receive video frames and save them to external DDR memory, verifying via the HDMI output to a monitor. I have no problems with that.
-
I understand that I have to compress my video stream, for example to H.264, and put it into a transport stream. Here I have little knowledge and need some hints.
-
Once the transport stream is formed, I can send it over the network as UDP packets. I have a working hardware solution that reads data from a FIFO and sends it to a remote PC as UDP packets.
-
Finally, I plan to receive and play the video using the ffmpeg library:
ffplay udp://localhost:5678
My question is basically about step 2: how do I convert pixel frames into a transport stream? (See the host-side re-muxing sketch below.) My options are:
- Use commercial IP, like
Here I doubt they are free to use, and we don't have much funding.
-
Use open cores, for example:
- http://sourceforge.net/projects/hardh264/ - this core generates only raw H.264 output, but how do I encapsulate it into a transport stream?
- I have searched opencores.org, but with no success on this topic.
- Maybe somebody knows of good, relevant open source FPGA projects?
-
Develop a hardware encoder myself using Vivado HLS (C language). The problem here is that I don't know the algorithm. Maybe I could dig into ffmpeg or Cisco's openh264 library and find a function there that converts raw pixel frames to H.264 and then puts them into a transport stream? Any help would be appreciated here as well.
I am also worried about compatibility between the stream format I might generate inside the FPGA and the one expected on the host by the ffplay utility. Any help, hints, links and books are appreciated!
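Regarding step 2 and the hardh264 question above, note that encapsulation does not have to happen on the FPGA at all: one option is to ship the raw Annex-B H.264 elementary stream over the network and re-mux it into MPEG-TS on the host with ffmpeg. A minimal host-side sketch, assuming Python and a captured elementary stream in a hypothetical file capture.h264; the ffmpeg options used are standard ones:

import subprocess

# Sketch, not the poster's design: wrap a raw Annex-B H.264 elementary
# stream (e.g. dumped by the FPGA) into an MPEG transport stream over UDP,
# so that `ffplay udp://localhost:5678` can play it. The file name and
# address are placeholders.
subprocess.run([
    "ffmpeg",
    "-f", "h264",        # treat the input as a raw H.264 elementary stream
    "-framerate", "30",  # raw streams carry no timing; assume 30 fps
    "-i", "capture.h264",
    "-c:v", "copy",      # no re-encode, just re-mux
    "-f", "mpegts",      # MPEG-TS muxer
    "udp://127.0.0.1:5678",
], check=True)

If ffplay accepts the result, the FPGA-side work reduces to producing a compliant elementary stream; TS encapsulation itself (fixed 188-byte packets plus PAT/PMT and PES headers) is comparatively simple to move into logic later.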
-
How to create a Master Playlist for HLS using a Java wrapper library for FFMPEG
17 January 2019, by Abhishek Nandgaonkar
I am trying to find or use a Java library that wraps FFMPEG to create HLS-compatible segments and metadata files, along with a master metadata file.
Most existing libraries handle the encoding just fine and expose extra functions for additional audio/video parameters, but I could not find a good way to make them create a master metadata file and the other resolution/bandwidth/bitrate-specific metadata and segments.
One way I am planning to go about it is to run the piece of code that performs HLS for me several times, once per bitrate, and then programmatically inspect the resulting metadata files to compose the master metadata file, as sketched below. But that adds room for errors.
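For what it is worth, the composition step itself is small; a sketch, in Python for illustration, with hypothetical variant playlist names, bandwidths and resolutions, using the standard #EXT-X-STREAM-INF tag from the HLS specification:

# Sketch: hand-compose an HLS master playlist from variant playlists that
# were produced by separate per-bitrate runs. Names and numbers are made up.
variants = [
    # (variant playlist, peak bandwidth in bits/s, resolution)
    ("vs0_manifest.m3u8", 5_250_000, "1920x1080"),
    ("vs1_manifest.m3u8", 4_200_000, "1280x720"),
    ("vs2_manifest.m3u8", 3_150_000, "960x540"),
]

lines = ["#EXTM3U", "#EXT-X-VERSION:3"]
for uri, bandwidth, resolution in variants:
    lines.append(f"#EXT-X-STREAM-INF:BANDWIDTH={bandwidth},RESOLUTION={resolution}")
    lines.append(uri)

with open("master.m3u8", "w") as f:
    f.write("\n".join(lines) + "\n")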
If you know of an existing library that can do this, please share the details. It would really help.
Libraries:
- https://github.com/bramp/ffmpeg-cli-wrapper
- https://github.com/hoary/JavaAV
- https://github.com/kokorin/Jaffree
- http://www.xuggle.com/
Documentation (see the Master Playlist section):
https://developer.apple.com/library/content/referencelibrary/GettingStarted/AboutHTTPLiveStreaming/about/about.html
FFMPEG command line code that works for me:
ffmpeg -loglevel debug -threads 4 -vsync 1 -i 'id.mp4' -vf yadif -g 30 -r 29.97 \
  -b:v:0 5250k -c:v libx264 -rc:v vbr_hq -pix_fmt yuv420p -profile:v main -level 4.1 -rc-lookahead 32 -forced-idr 1 \
  -b:v:1 4200k -c:v libx264 -rc:v vbr_hq -pix_fmt yuv420p -profile:v main -level 4.1 -rc-lookahead 32 -forced-idr 1 \
  -b:v:2 3150k -c:v libx264 -rc:v vbr_hq -pix_fmt yuv420p -profile:v main -level 4.1 -rc-lookahead 32 -forced-idr 1 \
  -b:a:0 256k -b:a:1 192k -b:a:2 128k -c:a aac -ar 48000 \
  -map 0:v -map 0:a:0 -map 0:v -map 0:a:0 -map 0:v -map 0:a:0 \
  -f hls -var_stream_map "v:0,a:0 v:1,a:1 v:2,a:2" -master_pl_name master.m3u8 \
  -t 300 -hls_time 10 -hls_init_time 4 -hls_list_size 0 -master_pl_publish_rate 10 \
  -hls_flags delete_segments+discont_start+split_by_time "vs%v_manifest.m3u8"
Any help would be appreciated. Thanks in advance.
-
How to Synchronize Audio with Video Frames [Python]
19 September 2023, by Ростислав
I want to stream video from a URL to a server over a socket, which then restreams it to all the clients in the room.


This code streams the video frame by frame:


import asyncio
import base64

import cv2

# `socket` is assumed to be an already-connected async Socket.IO client
# defined elsewhere in the program.

async def stream_video(room, url):
    cap = cv2.VideoCapture(url)
    fps = round(cap.get(cv2.CAP_PROP_FPS))

    while True:
        ret, frame = cap.read()
        if not ret: break
        _, img_bytes = cv2.imencode(".jpg", frame)
        img_base64 = base64.b64encode(img_bytes).decode('utf-8')
        img_data_url = f"data:image/jpeg;base64,{img_base64}"

        await socket.emit('segment', { 'room': room, 'type': 'video', 'stream': img_data_url})
        await asyncio.sleep(1/fps)

    cap.release()



And this is the code that streams the audio:


import subprocess

async def stream_audio(room, url):
    sample_size = 14000
    cmd_audio = [
        "ffmpeg",
        "-i", url,
        "-vn",                # drop the video stream
        "-f", "s16le",        # raw signed 16-bit little-endian PCM output
        "-c:a", "pcm_s16le",  # note: the original also passed -acodec libmp3lame,
                              # which overrides this and contradicts -f s16le, and a
                              # -sample_rate duplicate of -ar; both are dropped here
        "-ac", "2",           # 2 channels
        "-ar", "48000",       # 48 kHz sample rate
        "pipe:1",             # write raw PCM to stdout
    ]
    proc_audio = await asyncio.create_subprocess_exec(
        *cmd_audio, stdout=subprocess.PIPE, stderr=subprocess.DEVNULL
    )

    while True:
        audio_data = await proc_audio.stdout.read(sample_size)
        if audio_data:
            await socket.emit('segment', { 'room': room, 'type': 'audio', 'stream': audio_data})
        await asyncio.sleep(1)




But the problem is: how do I synchronize them? How many bytes do I need to read from ffmpeg every second so that the audio matches the frames?


I tried the following, but the problem with the chunk size still remains:


while True:
    audio_data = await proc_audio.stdout.read(sample_size)
    if audio_data:
        await socket.emit('segment', { 'room': room, 'type': 'audio', 'stream': audio_data})

    for i in range(fps):
        ret, frame = cap.read()
        if not ret: break
        _, img_bytes = cv2.imencode(".jpg", frame)
        img_base64 = base64.b64encode(img_bytes).decode('utf-8')
        img_data_url = f"data:image/jpeg;base64,{img_base64}"

        await socket.emit('segment', { 'room': room, 'type': 'video', 'stream': img_data_url})
        await asyncio.sleep(1/fps)



I also tried loading a chunk of audio into pydub, which shows that the duration of my 14000-byte chunk is 0.07 s, which is very short. And if I increase the read size to 192k bytes (as ChatGPT suggests), the audio simply plays very, very fast.
The best chunk size I was able to find is approximately 14000 bytes, but the audio is still not synchronized.
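For what it is worth, the byte math for raw PCM is fixed by the audio parameters: at 48000 Hz, 2 channels, 2 bytes per s16 sample, one second of audio is 48000 x 2 x 2 = 192000 bytes, so a 14000-byte chunk is 14000 / 192000, roughly 0.073 s, which matches the pydub reading. A sketch of one way to pace emission, assuming the PCM command above (with the conflicting mp3 codec flag removed, otherwise byte counts no longer map to time) and the poster's global `socket` client; it paces against a monotonic clock instead of fixed sleeps, since sleep-based pacing drifts by however long the reads and emits take:

import asyncio
import time

BYTES_PER_SECOND = 48000 * 2 * 2   # rate * channels * bytes per s16 sample = 192000
CHUNK_SECONDS = 0.1                # arbitrary choice: send 100 ms of audio per packet
CHUNK_SIZE = int(BYTES_PER_SECOND * CHUNK_SECONDS)

async def stream_audio_paced(room, proc_audio):
    # Hypothetical variant of stream_audio: N bytes sent always corresponds
    # to N / BYTES_PER_SECOND seconds of media time, and we sleep only by
    # however far we are ahead of the wall clock.
    start = time.monotonic()
    media_time = 0.0               # seconds of audio emitted so far
    while True:
        audio_data = await proc_audio.stdout.read(CHUNK_SIZE)
        if not audio_data:
            break
        await socket.emit('segment', { 'room': room, 'type': 'audio', 'stream': audio_data})
        media_time += len(audio_data) / BYTES_PER_SECOND
        ahead = (start + media_time) - time.monotonic()
        if ahead > 0:
            await asyncio.sleep(ahead)

The video loop can be paced the same way, advancing media_time by 1/fps per frame; once both loops are slaved to the same clock, the exact chunk size stops mattering for synchronization.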