
Media (91)
-
Géodiversité
9 September 2011
Updated: August 2018
Language: French
Type: Text
-
USGS Real-time Earthquakes
8 September 2011
Updated: September 2011
Language: French
Type: Text
-
SWFUpload Process
6 September 2011
Updated: September 2011
Language: French
Type: Text
-
La conservation du net art au musée. Les stratégies à l'œuvre
26 May 2011
Updated: July 2013
Language: French
Type: Text
-
Podcasting Legal guide
16 May 2011
Updated: May 2011
Language: English
Type: Text
-
Creativecommons informational flyer
16 May 2011
Updated: July 2013
Language: English
Type: Text
Other articles (30)
-
Websites made with MediaSPIP
2 May 2011. This page lists some websites based on MediaSPIP.
-
Creating farms of unique websites
13 April 2011. MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
This allows (among other things): implementation costs to be shared between several different projects/individuals, rapid deployment of multiple unique sites, and the creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)
-
Publier sur MédiaSpip
13 June 2013. Can I post content from an iPad tablet?
Yes, if your MediaSPIP installation is at version 0.2 or higher. If necessary, contact the administrator of your MediaSPIP to find out.
On other sites (6425)
-
Capture ffmpeg's metadata output in PowerShell
17 January 2021, by xdhmoore
I'm trying to capture the output of ffmpeg in PowerShell to get some metadata on some ogg & mp3 files. But when I do:


ffmpeg -i file.ogg 2>&1 | sls GENRE




The output includes a bunch of lines that don't contain my matching string, "GENRE":



album_artist : Post Human Era
 ARTIST : Post Human Era
 COMMENT : Visit http://posthumanera.bandcamp.com
 DATE : 2013
 GENRE : Music
 TITLE : Supplies
 track : 1
At least one output file must be specified




I am guessing something is different about the encoding. ffmpeg's output is colored, so maybe there are color control characters in the output that are breaking things? Or maybe ffmpeg's output isn't playing nicely with PowerShell's default UTF-16? I can't figure out whether there is another way to redirect stderr and strip the color characters, or to change the encoding of stderr.



EDIT:
Strangely, I also get nondeterministic output. Sometimes the output is as shown above; sometimes, with precisely the same command, the output is:



GENRE :







Which makes slightly more sense, but is still missing the part of the line I care about ('Music').



Somewhere, PowerShell is interpreting something that isn't a newline as a newline.
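
A minimal sketch of one possible workaround, assuming the stray lines come from ffmpeg's ANSI color escapes and from PowerShell turning stderr output into error records: force every record back to a plain string, strip the escape sequences, and only then filter. The -hide_banner flag and the escape-stripping regex are additions here, not part of the original command.

# Force each stderr record to plain text, strip ANSI color escapes, then filter.
# -hide_banner trims ffmpeg's version/configuration preamble.
$esc = [char]27
ffmpeg -hide_banner -i file.ogg 2>&1 |
    ForEach-Object { $_.ToString() -replace "$esc\[[0-9;]*m", '' } |
    Select-String 'GENRE'

As a side note (also an assumption about what metadata is wanted), ffprobe -show_format file.ogg writes the tag metadata to stdout, which sidesteps the stderr redirection entirely.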


-
ffmpeg errors in the daemon
3 October 2020, by smoto_shei
I created a shell script to compress a video using ffmpeg (4.3.1).


ffmpeg -y -i \
 '/var/www/System/Backend/Outputs/TempSaveMovie/200703_4_short_5fr_p2(100_20)_r(50_20).mp4' \
 -vcodec h264 -an \
 '/var/www/System/Backend/Outputs/MovieOutputs/200703_4_short_5fr_p2(100_20)_r(50_20).mp4'




If I run this script from the console, it runs without problems.
In fact, we use Python's
subprocess.call()
to execute it, and that works fine too.

cmd = 'sh /var/www/System/Backend/cv2toffmpeg.sh'
subprocess.call(cmd, shell=True)



However, if I run it from a daemonized Python program, I get an error like this:


Input #0, mov,mp4,m4a,3gp,3g2,mj2, from './Outputs/TempSaveMovie/200703_4_short_5fr_p2(100_20)_r(50_20).mp4':
 Metadata:
 major_brand : isom
 minor_version : 512
 compatible_brands: isomiso2mp41
 encoder : Lavf58.35.100
 Duration: 00:00:06.15, start: 0.000000, bitrate: 10246 kb/s
 Stream #0:0(und): Video: mpeg4 (Simple Profile) (mp4v / 0x7634706D), yuv420p, 1280x720 [SAR 1:1 DAR 16:9], 10244 kb/s, 13 fps, 13 tbr, 13312 tbn, 13 tbc (default)
 Metadata:
 handler_name : VideoHandler
Stream mapping:
 Stream #0:0 -> #0:0 (mpeg4 (native) -> h264 (h264_nvenc))
Press [q] to stop, [?] for help
[mpeg4 @ 0x55cec17c5480] header damaged
[mpeg4 @ 0x55cec17c6840] header damaged
[mpeg4 @ 0x55cec1855f80] header damaged
[mpeg4 @ 0x55cec1866e00] header damaged
Output #0, mp4, to './Outputs/MovieOutputs/200703_4_short_5fr_p2(100_20)_r(50_20).mp4':
 Metadata:
 major_brand : isom
 minor_version : 512
 compatible_brands: isomiso2mp41
 encoder : Lavf58.45.100
 Stream #0:0(und): Video: h264 (h264_nvenc) (Main) (avc1 / 0x31637661), yuv420p, 1280x720 [SAR 1:1 DAR 16:9], q=-1--1, 2000 kb/s, 13 fps, 13312 tbn, 13 tbc (default)
 Metadata:
 handler_name : VideoHandler
 encoder : Lavc58.91.100 h264_nvenc
 Side data:
 cpb: bitrate max/min/avg: 0/0/2000000 buffer size: 4000000 vbv_delay: N/A
Error while decoding stream #0:0: Invalid data found when processing input
[mpeg4 @ 0x55cec17c8780] header damaged
Error while decoding stream #0:0: Invalid data found when processing input
[mpeg4 @ 0x55cec17c5480] header damaged



I think the problem occurs when ffmpeg is run from a daemonized process. There seems to have been a similar problem in the past:
Ffmpeg does not properly convert videos when run as daemon
I would like to ask for your help in solving this problem. Thank you for your help from Japan.
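
A minimal sketch of one way to narrow this down, not a confirmed fix: call ffmpeg directly from the daemonized process with absolute paths, an explicit working directory and captured stderr, so that differences between the console environment and the daemon environment (current directory, PATH, and so on) are taken out of play. The ffmpeg location and the log file path below are assumptions; the input and output paths are taken from the question.

import subprocess

# Run ffmpeg directly (no intermediate shell script), with absolute paths,
# an explicit working directory, and captured stderr for later comparison
# with the console run. "/usr/bin/ffmpeg" is an assumed install location.
cmd = [
    "/usr/bin/ffmpeg", "-y",
    "-i", "/var/www/System/Backend/Outputs/TempSaveMovie/200703_4_short_5fr_p2(100_20)_r(50_20).mp4",
    "-vcodec", "h264", "-an",
    "/var/www/System/Backend/Outputs/MovieOutputs/200703_4_short_5fr_p2(100_20)_r(50_20).mp4",
]
result = subprocess.run(cmd, cwd="/var/www/System/Backend",
                        capture_output=True, text=True)
with open("/var/www/System/Backend/ffmpeg_daemon.log", "w") as log:
    log.write(result.stderr)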


-
How would I assign multiple mmaps from a single file descriptor?
9 June 2011, by Alex Stevens
So, for my final year project, I'm using Video4Linux2 to pull YUV420 images from a camera, pass them through to x264 (which uses these images natively), and then send the encoded stream via Live555 to an RTP/RTCP-compliant video player on a client over a wireless network. All of this I'm trying to do in real time, so there'll be a control algorithm, but that's not the scope of this question. All of this - except Live555 - is being written in C. Currently, I'm near the end of encoding the video, but want to improve performance.
To say the least, I've hit a snag... I'm trying to avoid user-space pointers for V4L2 and use mmap() instead. I'm encoding video, but since it's YUV420, I've been malloc'ing new memory to hold the Y', U and V planes in three different variables for x264 to read from. I would like these variables to be pointers into an mmap'ed piece of memory.
However, the V4L2 device has one single file descriptor for the buffered stream, and I need to split the stream into three mmap'ed variables adhering to the YUV420 standard, like so...
buffers[n_buffers].y_plane = mmap(NULL, (2 * width * height) / 3,
                                  PROT_READ | PROT_WRITE, MAP_SHARED,
                                  fd, buf.m.offset);
buffers[n_buffers].u_plane = mmap(NULL, width * height / 6,
                                  PROT_READ | PROT_WRITE, MAP_SHARED,
                                  fd, buf.m.offset +
                                      ((2 * width * height) / 3 + 1) /
                                      sysconf(_SC_PAGE_SIZE));
buffers[n_buffers].v_plane = mmap(NULL, width * height / 6,
                                  PROT_READ | PROT_WRITE, MAP_SHARED,
                                  fd, buf.m.offset +
                                      ((2 * width * height) / 3 +
                                       width * height / 6 + 1) /
                                      sysconf(_SC_PAGE_SIZE));
Where "width" and "height" are the resolution of the video (e.g. 640x480).
From what I understand... mmap seeks through a file, kind of like this (pseudo-ish code):
fd = v4l2_open(...);
lseek(fd, buf.m.offset + (2 * width * height) / 3);
read(fd, buffers[n_buffers].u_plane, width * height / 6);
My code is located in a Launchpad repo here (for more background):
http://bazaar.launchpad.net/ alex-stevens/+junk/spyPanda/files (Revision 11)
And the YUV420 format can be seen clearly in this Wikipedia illustration: http://en.wikipedia.org/wiki/File:Yuv420.svg (I essentially want to split up the Y, U and V bytes into separate mmap'ed memory regions.)
Anyone care to explain a way to mmap three variables to memory from the one file descriptor, or why I went wrong? Or even hint at a better way to pass the YUV420 buffer to x264? :P
Cheers! ^^
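
As a closing sketch, here is the approach usually taken instead, assuming a single-planar V4L2 buffer that holds the packed YUV420 frame: mmap the whole buffer once at the offset the driver reports (mmap offsets must be page-aligned, so per-plane mmap calls with hand-computed offsets generally cannot work), then derive the three plane pointers by pointer arithmetic inside that single mapping. The struct and function names below are illustrative, not from the original code.

#include <stddef.h>
#include <stdint.h>
#include <sys/types.h>
#include <sys/mman.h>

/* Illustrative buffer record: one mapping per V4L2 buffer,
 * with derived pointers into the Y, U and V planes. */
struct yuv_buffer {
    void    *start;   /* whole buffer, mapped once            */
    size_t   length;  /* buf.length from VIDIOC_QUERYBUF      */
    uint8_t *y_plane;
    uint8_t *u_plane;
    uint8_t *v_plane;
};

/* Map one queried buffer and compute YUV420 plane pointers:
 * Y is width*height bytes, U and V are width*height/4 bytes each. */
static int map_yuv420(struct yuv_buffer *b, int fd, off_t offset,
                      size_t length, int width, int height)
{
    b->start = mmap(NULL, length, PROT_READ | PROT_WRITE, MAP_SHARED,
                    fd, offset); /* offset comes page-aligned from the driver */
    if (b->start == MAP_FAILED)
        return -1;

    b->length  = length;
    b->y_plane = (uint8_t *)b->start;
    b->u_plane = b->y_plane + (size_t)width * height;
    b->v_plane = b->u_plane + (size_t)width * height / 4;
    return 0;
}

x264 can then be pointed at y_plane, u_plane and v_plane directly, without the extra malloc and copy.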