
Media (1)
-
Rennes Emotion Map 2010-11
19 October 2011, by
Updated: July 2013
Language: French
Type: Text
Other articles (62)
-
Encoding and transformation into formats readable on the Internet
10 April 2011
MediaSPIP transforms and re-encodes uploaded documents so as to make them readable on the Internet and automatically usable without any intervention from the content creator.
Videos are automatically encoded into the formats supported by HTML5: MP4, Ogv and WebM. The "MP4" version is also used for the fallback Flash player needed by older browsers.
Audio documents are likewise re-encoded into the two formats usable with HTML5: MP3 and Ogg. The "MP3" version (...)
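Purely as an illustration of those target formats (MediaSPIP performs this re-encoding through its own automated pipeline, so the exact options may differ), equivalent ffmpeg invocations might look like the following; the file names are placeholders:

# HTML5 video targets: MP4 (H.264), Ogv (Theora) and WebM (VP8)
ffmpeg -i source.mov -c:v libx264 -c:a aac -pix_fmt yuv420p video.mp4
ffmpeg -i source.mov -c:v libtheora -c:a libvorbis video.ogv
ffmpeg -i source.mov -c:v libvpx -c:a libvorbis video.webm
# HTML5 audio targets: MP3 and Ogg (Vorbis)
ffmpeg -i source.wav -c:a libmp3lame audio.mp3
ffmpeg -i source.wav -c:a libvorbis audio.ogg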
-
Authorizations overridden by plugins
27 April 2010, by Mediaspip core
autoriser_auteur_modifier() so that visitors are able to modify their information on the authors page
-
ANNEX: The plugins used specifically for the farm
5 March 2010, by
The central/master site of the farm needs several additional plugins, beyond those of the channels, in order to work properly: the Gestion de la mutualisation plugin; the inscription3 plugin, to manage registrations and requests to create a mutualisation instance as soon as users sign up; the verifier plugin, which provides a field-verification API (used by inscription3); the champs extras v2 plugin, required by inscription3 (...)
On other sites (9621)
-
Why does the video lose a few seconds after FFMPEG xfade?
25 August 2023, by promaxdev
I have a use case where I need to add a few xfade transitions to an existing video at uniform intervals. I followed an approach almost identical to the one explained in the official document here and the SO reply here.


Here is the command that I use.


ffmpeg 
-i clean_0.mp4 
-filter_complex "[0] split = 8[i1][i2][i3][i4][i5][i6][i7][i8]; 
 [i1]select='between(t\,0.0\,3.75)',setpts='PTS-STARTPTS'[i11]; 
 [i2]select='between(t\,3.75\,7.5)',setpts='PTS-STARTPTS'[i22]; 
 [i3]select='between(t\,7.5\,11.25)',setpts='PTS-STARTPTS'[i33]; 
 [i4]select='between(t\,11.25\,15.0)',setpts='PTS-STARTPTS'[i44]; 
 [i5]select='between(t\,15.0\,18.75)',setpts='PTS-STARTPTS'[i55]; 
 [i6]select='between(t\,18.75\,22.5)',setpts='PTS-STARTPTS'[i66]; 
 [i7]select='between(t\,22.5\,26.25)',setpts='PTS-STARTPTS'[i77]; 
 [i8]select='between(t\,26.25\,30.0)',setpts='PTS-STARTPTS'[i88]; 
 [i11][i22]xfade=duration=1:offset=2.75:transition=dissolve [c1]; 
 [i33][i44]xfade=duration=1:offset=2.75:transition=distance [c2]; 
 [i55][i66]xfade=duration=1:offset=2.75:transition=fadegrays [c3]; 
 [i77][i88]xfade=duration=1:offset=2.75:transition=pixelize [c4]; 
 [c1][c2][c3][c4]concat=n=4:v=1:a=0 " 
-pix_fmt yuv420p -y clean_out.mp4 



(The above command is executed as a single line; it is broken up here for ease of reading.)


What this does is split the input video into parts of equal duration, insert xfade effects between them, and then concat them. So the net effect should be the original video with xfade effects added.
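For reference, the basic two-input form of xfade described in the official documentation looks roughly like this (file names here are only placeholders):

ffmpeg -i first.mp4 -i second.mp4 -filter_complex "xfade=transition=fade:duration=1:offset=4" -pix_fmt yuv420p output.mp4

In my case there is only a single input, which is why it is split into segments first.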


The input video is 30 seconds long at 25 fps, but the output video is only 26 seconds.


Here is my analysis so far:


- The number of seconds lost grows with each added transition, i.e. when 5 transitions are used, the output video drops to 25 seconds.
- ffprobe at different stages shows:
  - i1, i2, ... (copies of the input video) had 750 frames @ 25 fps, i.e. 30 seconds
  - i11, i22, ... had 94 frames @ 25 fps, i.e. 3.76 seconds (3.76 x 8 = 30.08 s)
  - c1, c2, ... had 163 frames @ 25 fps, i.e. 6.52 seconds (6.52 x 4 = 26.08 s)
- Enabling trace on ffmpeg showed filters named 'Parsed_select_*', 'Parsed_setpts_*', 'Parsed_xfade_*' and 'Parsed_concat_*' corresponding to the 'select', 'setpts', 'xfade' and 'concat' filters, plus an 'auto_scale_*' filter inserted automatically by ffmpeg. However, the rest of the logs only contained details for the Parsed_select_* and Parsed_setpts_* filters; there was no further trace of the Parsed_xfade_* filters, so not much information there.
- The end result is the concat of all the c* videos, with the reduced duration.


So we can infer that xfade is causing some frames to be lost (or I am doing something wrong). I need help finding the reason for the reduction in the duration of the final video and fixing it. Also, is there a way to log an xfade trace?
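For completeness, probing and tracing of the kind described above looks like this (clean_0.mp4 and clean_out.mp4 are the files from the command shown earlier; the "..." stands for the same filtergraph):

# frame count, frame rate and duration of the output (same idea for the intermediate segments)
ffprobe -v error -count_frames -show_entries format=duration:stream=nb_read_frames,r_frame_rate clean_out.mp4

# full trace log, to grep for Parsed_xfade_* / Parsed_concat_* entries
ffmpeg -loglevel trace -i clean_0.mp4 -filter_complex "..." -pix_fmt yuv420p -y clean_out.mp4 2> trace.log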


-
Tools for investigating video corruption — ffmpeg / libavcodec
11 July 2013, by Gopherkhan
In my current work I'm trying to encode some images to h264 video using FFMPEG's C library. The resulting video plays fine in VLC but has no preview image. The video plays in VLC and MPlayer on Ubuntu, but won't play on Mac or PC (in fact, it causes a "VTDecoderXPCService quit unexpectedly" error on Mac).
If I run the resulting file through FFMPEG using the command line, the resulting file has a preview image, and plays correctly everywhere.
Apparently the file that I get out of the program is corrupt in some weird place, but I don't have any output during compilation or at runtime to indicate where. I can't share my code at the moment (work code isn't open source yet :-( ), but I have tried a number of things:
- Writing only header and trailer data (av_write_trailer) and no frames
- writing frames only minus the trailer (using avcodec_encode_video2 and av_write_frame)
- Adjusting our time_base and frame pts values to encode only one frame per second
- Removing all variable frame rate code
- Numerous other variants that I won't bother you with here
In creating my project, I've also followed the following tutorials:
And consulted the deprecated ffmpeg functions list
And compiled FFMPEG on ubuntu according to the official doc
But every run of the program runs into the exact same problem.
My question is, is there anything obvious that causes a programmatic run of FFMpeg to differ from a console run (e.g., an incomplete finalization, some threading issues, etc.)? Like some obvious reason that a console run could repair a corrupted file? Or is there a decent tool/method for inspecting a video file and finding the point of corruption?
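For concreteness, by "inspecting" I mean stock invocations along these lines (broken.mp4 stands in for the file my program writes):

# dump container and stream headers
ffprobe -v error -show_format -show_streams broken.mp4

# decode the whole file and collect any decode errors
ffmpeg -v error -i broken.mp4 -f null - 2> decode_errors.log

# plain remux for comparison with the console-run output mentioned above
ffmpeg -i broken.mp4 -c copy remuxed.mp4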
-
FFmpeg RTP_Mpegts over RTP protocol
7 March 2020, by Nicolò
I'm trying to implement a client/server application based on FFmpeg. Unfortunately, RTP_MPEGTS isn't documented in the official FFmpeg Documentation - Formats.
Anyway, I found inspiration in this old thread.
Server Side
(1) Capture mic audio as input, (2) encode it as PCM 8 kHz mono, and (3) send it locally in RTP_MPEGTS format over the RTP protocol.
ffmpeg -f avfoundation -i none:2 -ar 8000 -acodec pcm_u8 -ac 1 -f rtp_mpegts rtp://127.0.0.1:41954
- This works, but at startup it warns "[mpegts @ 0x7fda13024600] frame size not set"
Client Side (on the same machine)
(1) Receive the RTP audio stream as input, (2) write it to a file or play it back.
ffmpeg -i rtp://127.0.0.1:41954 -vcodec copy -y "output.wav"
- I'm using -vcodec copy because I've already verified it in another RTP stream in which -acodec copy didn't work.
- This gets stuck, and when I close it with the Ctrl+C shortcut it prints:
Input #0, rtp, from 'rtp://127.0.0.1:41954':
Duration: N/A, start: 8.956122, bitrate: N/A
Program 1
Metadata:
service_name : Service01
service_provider: FFmpeg
Stream #0:0: Data: bin_data ([6][0][0][0] / 0x0006)
Output #0, wav, to 'output.wav':
Output file #0 does not contain any stream
- I don't understand whether the client didn't receive any stream, or whether it cannot write the RTP packets into the "output.wav" file. (Client or server problem?)
- The old thread explains a workaround: the server could run 2 ffmpeg instances, one producing a "tmp.ts" file via mpegts, and the other taking "tmp.ts" as input and streaming it over RTP (see the sketch after this list). Is it possible?
- Is there any better way to implement this client/server setup with the lowest latency possible?
Thanks for any help provided.
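As a sketch of the two-instance workaround mentioned in the list above (untested; tmp.ts is a placeholder name, and the capture parameters simply mirror the server command above):

# instance 1: capture the mic and write an MPEG-TS file
ffmpeg -f avfoundation -i none:2 -ar 8000 -acodec pcm_u8 -ac 1 -f mpegts tmp.ts

# instance 2: read that file back at native rate and re-stream it over RTP
ffmpeg -re -i tmp.ts -c copy -f rtp_mpegts rtp://127.0.0.1:41954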