
Other articles (73)
-
HTML5 audio and video support
13 April 2011 — MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
For older browsers, the Flowplayer Flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...) -
Libraries and binaries specific to video and audio processing
31 January 2010 — The following software and libraries are used by SPIPmotion in one way or another.
Required binaries FFMpeg: the main encoder; transcodes almost all types of video and audio files into formats playable on the web. See this tutorial for its installation; Oggz-tools: inspection tools for ogg files; Mediainfo: retrieves information from most video and audio formats;
Complementary, optional binaries flvtool2: (...) -
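As a hypothetical illustration of the transcoding role described above (the file names and codec choices are assumptions, not taken from the article), a typical FFMpeg invocation for producing a web-playable file looks like:

```shell
# Transcode a source video to a web-playable H.264/AAC MP4.
# input.avi and output.mp4 are placeholder names.
ffmpeg -i input.avi -c:v libx264 -preset medium -crf 23 -c:a aac -b:a 128k output.mp4
```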
From upload to the final video [standalone version]
31 January 2010 — The path of an audio or video document through SPIPMotion is divided into three distinct stages.
Upload and retrieval of information about the source video
First of all, you need to create a SPIP article and attach the "source" video document to it.
When this document is attached to the article, two actions are executed in addition to the normal behaviour: retrieval of the technical information of the file's audio and video streams; generation of a thumbnail: extraction of a (...)
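The two extra actions described above can be sketched with stock ffprobe/ffmpeg commands (a hypothetical equivalent; SPIPMotion's actual calls may differ, and source.mp4 and the 5-second seek point are assumed names/values):

```shell
# Retrieve the technical information of the file's audio and video streams.
ffprobe -v error -show_streams source.mp4

# Generate a thumbnail: extract a single frame (here at the 5-second mark).
ffmpeg -i source.mp4 -ss 5 -frames:v 1 thumbnail.jpg
```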
On other sites (7133)
-
ffmpeg: how to calculate a complete frame ('I' frame) from a 'P' frame. Is this conceptually correct?
15 February 2014, by Whoami — I am trying to understand how ffmpeg handles streaming video.
What I understood:
From the IP camera I get frames like 'IPPPPPPPPPPPPPPPPIPPPPPPP'.
An 'I' frame is a complete frame, whereas a 'P' frame depends on the closest previous 'P' or 'I' frame.
I get each frame by using avcodec_decode_video2:
while (av_read_frame(context, &packet) >= 0)
{
    // LOGD("Received packet: DTS %ld, PTS %ld", packet.dts, packet.pts);
    if (packet.stream_index == videoStreamIndex) {
        avcodec_decode_video2(pCodecCtx, pFrame, &finished, &packet);
        if (finished) {
            // Here is my frame; its type comes from
            // av_get_picture_type_char(pFrame->pict_type).
        }
    }
    av_free_packet(&packet); // release the packet once it has been consumed
}
Now, when I display just the frames I have received, whenever an 'I' frame arrives it displays properly, but with 'P' frames the image goes for a toss.
1) Do we need to do any manual calculation to convert a 'P' frame into an 'I' frame so that it can be rendered?
2) If not, what do I have to take care of? Does DTS/PTS calculation do the magic here?
-
Creating .ts chunks from a .mp4 file
11 June 2015, by kopalvich — I need to create .ts chunks from a .mp4 file without using the ffmpeg API. I have already implemented this task using the ffmpeg API, and it works, but my team lead wants me to get rid of it, and it is impossible to convince him otherwise. The code is supposed to run under load.
I have already found out how to read the mp4 format: where to find the frames, their offsets and sizes in the file, and their pts's, for both audio and video. A huge achievement for me, as I haven't been working in this area for long.
All I have managed is to create a playlist where each chunk starts with an I-frame. I feed the start and stop pts's of each playlist entry to the old code (slightly modified) that uses the ffmpeg API, and it creates proper ts chunks.
But I still cannot create the ts chunks without the ffmpeg API.
At first I tried to read the code of the nginx rtmp module, the HLS part of it, but I couldn't understand anything there. It's too complex and I lack the specific knowledge.
Now I'd like to read something on the ts format. Can anyone advise where to look? Thanks.
-
How can I remove portions of audio instead of concatenating them?
16 October 2022, by Plainsage — I've got this command that concatenates the audio between seconds 115-368 and from second 605 to the end, so that the output contains only that audio.


ffmpeg -i "abc.mp3" -filter_complex "[0:a]atrim=start=115:end=368,asetpts=PTS-STARTPTS[ba];[0:a]atrim=start=605,asetpts=PTS-STARTPTS[da]" -b:a 320k "def.mp3"



How can I make this same command so that, instead of the output being the concatenation of these two portions, it removes them, and the output contains the remaining audio?


I know there's a way where i can just instead get the start and end of the audio i want and concatenate those, but i'd like to know for my knowledge if there is a way to remove the audio, rather than concatenate into the output.