
Media (1)
-
Ogg detection bug
22 March 2013, by
Updated: April 2013
Language: French
Type: Video
Other articles (46)
-
What is an editorial?
21 June 2013, by
Write your point of view in an article. It will be filed in a section set aside for this purpose.
An editorial is a text-only article. Its purpose is to gather points of view in a dedicated section. A single editorial is featured on the home page; to read previous ones, browse the dedicated section.
You can customize the form used to create an editorial.
Editorial creation form: for a document of the editorial type, the (...) -
Publishing on MediaSPIP
13 June 2013
Can I post content from an iPad tablet?
Yes, if your installed MediaSPIP is at version 0.2 or later. If needed, contact your MediaSPIP administrator to find out. -
Contribute to translation
13 April 2011
You can help us improve the language used in the software interface to make MediaSPIP more accessible and user-friendly. You can also translate the interface into any language, which allows it to spread to new linguistic communities.
To do this, we use the SPIP translation interface, where all the language modules of MediaSPIP are available. Just subscribe to the mailing list and request further information on translation.
MediaSPIP is currently available in French and English (...)
On other sites (7526)
-
Elastic Transcoder output duration doesn't match the sum of my input durations
28 March 2019, by auricless
I have multiple media files to concatenate into a single video file, composed of different media types including video, audio, and images. I use FFmpeg to convert the audio and images to video, then use Elastic Transcoder to stitch/concatenate the video files into a single one. When creating the transcoder job, whenever I place an input video that was originally an image (converted by FFmpeg) last in order, its exposure in the final output shrinks to roughly 5 seconds whenever its original duration is greater than 5 seconds. This happens only under that condition.
Example :
(1) video1 - 10s
(2) image1 - 10s
(3) video2 - 15s
(4) image2 - 20s
output: video - 40s
(image2's duration/exposure in the output shrinks to approx. 5s)
Clearly, the sum of the input durations and the output duration do not match. This is even explicitly stated in the Elastic Transcoder job result.
I thought I had wrong conversion settings in FFmpeg, so I changed some options. After those changes, comparing an image converted to video (V1) with an authentic video to stitch with (V2), their settings are almost the same. I use
ffmpeg -i myVideo.mp4
to check the details. They differ only in SAR, DAR, tbr, and tbn, and I don't really know what those are used for. I already checked the duration of the converted images after the FFmpeg conversion and it is accurate; it only gets messed up after being fed to Elastic Transcoder as the last input.
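To compare the sum of the input durations against the output, the Duration line that ffprobe (or ffmpeg -i) prints can be parsed programmatically. A minimal Python sketch (the helper name and regular expression are mine, not from the original post):

```python
import re

def parse_duration(probe_text):
    """Return the Duration field of ffprobe/ffmpeg -i output, in seconds."""
    m = re.search(r"Duration:\s*(\d+):(\d+):(\d+(?:\.\d+)?)", probe_text)
    if m is None:
        raise ValueError("no Duration line found")
    hours, minutes, seconds = int(m.group(1)), int(m.group(2)), float(m.group(3))
    return hours * 3600 + minutes * 60 + seconds

# e.g. the clip2.mp4 header below reports Duration: 00:00:10.05
parse_duration("Duration: 00:00:10.05, start: 0.042667")  # -> 10.05
```

Summing this value over all inputs and comparing with the output's Duration makes the mismatch easy to spot automatically.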
Here is my conversion command with FFmpeg (image to video):
ffmpeg -r 29.97 -i [input.jpg] -f lavfi -i anullsrc=r=48000:cl=stereo -t [duration] -acodec aac -vcodec libx264 -profile:v baseline -pix_fmt yuv420p -t [duration] -vf scale=854:480 -strict -2 [output.mp4]
The expected result should be that the output file is consistent with the actual duration it has.
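A sketch of that conversion wrapped in Python (the function name and defaults are mine; note `cl=stereo` rather than `cl:stereo` in the anullsrc spec, and `-loop 1`, which the image demuxer normally needs to hold a single still for the full duration):

```python
def image_to_video_cmd(image, duration, output, fps=29.97, size="854:480"):
    """Build an ffmpeg command that turns a still image into a video clip
    with a silent stereo AAC track (a sketch, not the poster's exact setup)."""
    return [
        "ffmpeg",
        "-loop", "1", "-r", str(fps), "-i", image,          # looped still image
        "-f", "lavfi", "-i", "anullsrc=r=48000:cl=stereo",  # silent audio source
        "-t", str(duration),                                # clip length
        "-acodec", "aac", "-strict", "-2",
        "-vcodec", "libx264", "-profile:v", "baseline",
        "-pix_fmt", "yuv420p",
        "-vf", f"scale={size}",
        output,
    ]
```

Building the command as a list also avoids shell-quoting issues when passing it to subprocess.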
[EDIT]
Here are the real videos I feed to Elastic Transcoder, inspected using
ffprobe filename
:

Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'clip2.mp4':
Metadata:
major_brand : isom
minor_version : 512
compatible_brands: isomiso2avc1mp41
encoder : Lavf57.71.100
Duration: 00:00:10.05, start: 0.042667, bitrate: 476 kb/s
Stream #0:0(und): Audio: aac (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 128 kb/s (default)
Metadata:
handler_name : SoundHandler
Stream #0:1(und): Video: h264 (Constrained Baseline) (avc1 / 0x31637661), yuv420p, 854x480 [SAR 2136:2135 DAR 89:50], 341 kb/s, 29.97 fps, 29.97 tbr, 30k tbn, 59.94 tbc (default)
Metadata:
handler_name : VideoHandler
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'image2.mp4':
Metadata:
major_brand : isom
minor_version : 512
compatible_brands: isomiso2avc1mp41
encoder : Lavf56.12.100
Duration: 00:02:10.03, start: 0.033333, bitrate: 130 kb/s
Stream #0:0(und): Video: h264 (Constrained Baseline) (avc1 / 0x31637661), yuv420p, 854x480 [SAR 1943:1004 DAR 829661:240960], 2636 kb/s, SAR 283440:146461 DAR 1181:343, 25 fps, 25 tbr, 12800 tbn, 50 tbc (default)
Metadata:
handler_name : VideoHandler
Stream #0:1(und): Audio: aac (mp4a / 0x6134706D), 48000 Hz, mono, fltp, 128 kb/s (default)
Metadata:
handler_name : SoundHandler -
How to pipe multiple images, being created in parallel with an index, to ffmpeg so that it can match the speed of image creation?
23 September 2020, by vishwas.mittal
We've got a system that spews out 4-channel png images frame by frame (we control the output format of these images as well, so we can use something else as long as it supports transparency). Right now we wait for all the images and then encode them with ffmpeg into a webm video file with vp8 (the libvpx encoder). But we now want to pipe these images to FFmpeg so that it encodes the WebM video simultaneously, as the images are produced, rather than waiting for ffmpeg to encode all the images afterwards.

This is the current command, in Python syntax:


['/usr/bin/ffmpeg', '-hide_banner', '-y', '-loglevel', 'info', '-f', 'rawvideo', '-pix_fmt', 'bgra', '-s', '1573x900', '-framerate', '30', '-i', '-', '-i', 'audio.wav', '-c:v', 'libvpx', '-b:v', '0', '-crf', '30', '-tile-columns', '2', '-quality', 'good', '-speed', '4', '-threads', '16', '-auto-alt-ref', '0', '-g', '300000', '-map', '0:v:0', '-map', '1:a:0', '-shortest', 'video.webm']
# for ease of read:
# /usr/bin/ffmpeg -hide_banner -y -loglevel info -f rawvideo -pix_fmt bgra -s 1573x900 -framerate 30 -i - -i audio.wav -c:v libvpx -b:v 0 -crf 30 -tile-columns 2 -quality good -speed 4 -threads 16 -auto-alt-ref 0 -g 300000 -map 0:v:0 -map 1:a:0 -shortest video.webm

proc = subprocess.Popen(args, stdin=subprocess.PIPE)



Here is an example of passing a frame to the FFmpeg process's stdin:


import os
import time

import cv2
import numpy as np

# wait for the next frame to get ready
for frame_path in frame_path_list:
    while not os.path.exists(frame_path):
        time.sleep(0.25)
    frame = cv2.imread(frame_path, cv2.IMREAD_UNCHANGED)

    # write the frame to stdin so ffmpeg can encode it
    proc.stdin.write(frame.astype(np.uint8).tobytes())



The current speed of this process is 0.135x, which is a huge bottleneck for us. Earlier, when we were taking input as
-pattern_type glob -i images/*.png
we were getting around 1x-1.2x on a single core. So our conclusion is that we're bottlenecked by stdin, and hence we're looking for ways to pass input through multiple sources or somehow help ffmpeg parallelize this effort. A few options we're thinking of:

- Somehow feed it to different pipes and make ffmpeg read from them.
- Append a new image to ffmpeg without re-encoding the whole video, but we didn't find a way to do this by giving input images directly.
But we haven't been able to get either of these working, and we're open to any other solutions as well. We'd really appreciate help on this. Thanks!
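One way to decouple frame production from the blocking stdin write is a bounded queue drained by a background writer thread, so image decoding and the pipe write overlap instead of alternating. A hedged sketch (the helper and the in-memory sink are illustrative, not from the original code; in practice `sink` would be `proc.stdin`):

```python
import io
import queue
import threading

def start_frame_writer(sink, maxsize=8):
    """Drain frames from a bounded queue into `sink` on a background
    thread; put None on the queue to signal end of stream."""
    frames = queue.Queue(maxsize=maxsize)

    def pump():
        while True:
            frame = frames.get()
            if frame is None:        # sentinel: no more frames
                break
            sink.write(frame)
        sink.flush()

    worker = threading.Thread(target=pump, daemon=True)
    worker.start()
    return frames, worker

# Demo against an in-memory sink instead of ffmpeg's stdin:
buf = io.BytesIO()
frames, worker = start_frame_writer(buf)
for i in range(3):
    frames.put(bytes([i]) * 4)       # stand-in for frame.tobytes()
frames.put(None)
worker.join()
```

The bounded queue applies backpressure: if the encoder falls behind, producers block on put() instead of buffering unboundedly in memory.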


-
First input link main timebase do not match the corresponding second input link xfade timebase [duplicate]
25 March 2021, by Captain_Zaraki
I am trying to concatenate two videos while adding a transition effect. It works fine on some videos but gives an error on others.


The command I am using is:


ffmpeg -i 1.mp4 -i 2.mp4 -filter_complex "[0][1]xfade=transition=smoothup:duration=1:offset=1,format=yuv420p" output.mp4 



The error I am getting is:


[swscaler @ 0x558596a78800] deprecated pixel format used, make sure you did set range correctly

[Parsed_xfade_0 @ 0x558596a39ac0] First input link main timebase (1/12800) do not match the corresponding second input link xfade timebase (1/24000)

[Parsed_xfade_0 @ 0x558596a39ac0] Failed to configure output pad on Parsed_xfade_0

Error reinitializing filters!

Failed to inject frame into filter network: Invalid argument

Error while processing the decoded data for stream #1:0
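The error reports that the two inputs carry different timebases (1/12800 vs 1/24000), which usually goes along with different frame rates; normalizing both streams with settb and fps before xfade is a common fix. A sketch that builds such a command (the helper name and the 25 fps choice are mine; pick the rate you want for the output):

```python
def xfade_cmd(first, second, offset, duration=1, fps=25, transition="smoothup"):
    """Build an ffmpeg command whose inputs are normalized to a common
    timebase and frame rate before xfade (a sketch, not a guaranteed fix
    for every input pair)."""
    graph = (
        f"[0:v]settb=AVTB,fps={fps}[v0];"       # force a common timebase/rate
        f"[1:v]settb=AVTB,fps={fps}[v1];"
        f"[v0][v1]xfade=transition={transition}"
        f":duration={duration}:offset={offset},format=yuv420p"
    )
    return ["ffmpeg", "-i", first, "-i", second,
            "-filter_complex", graph, "output.mp4"]
```

With both links sharing the same timebase and frame rate, xfade can configure its output pad and the "Error reinitializing filters!" failure should go away.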