
Other articles (54)
-
Customise by adding your logo, banner or background image
5 September 2013. Some themes support three customisation elements: adding a logo; adding a banner; adding a background image.
-
Writing a news item
21 June 2013. Present the changes on your MediaSPIP site, or news about your projects, through the news section.
In MediaSPIP's default theme, spipeo, news items are displayed at the bottom of the main page, below the editorials.
You can customise the news item creation form.
News item creation form: for a document of type "news item", the default fields are: publication date (customise the publication date) (...)
-
Publishing on MediaSPIP
13 June 2013. Can I post content from an iPad tablet?
Yes, if your MediaSPIP installation is version 0.2 or higher. If necessary, contact your MediaSPIP administrator to find out.
On other sites (8437)
-
How to check the number of duplicated frames in the first second of a video (about 30 frames) and apply this to mpdecimate
20 October 2018, by cool jobs. How can I count the number of duplicated frames in the first second of a video (about 30 frames), measured from its start? I want to apply this to mpdecimate while keeping the audio. Some videos have three duplicated frames, some have 15, some have 19, so it looks like I need a variable or a math operator usable as an ffmpeg expression.
- Some videos have duplicated frames at the beginning, but the number of duplicated frames is not always the same.
- If I could get back a variable holding the number of duplicated frames from step 1, I would split and trim the video, apply mpdecimate (with audio) over as many frames as that variable says, and finally concatenate the pieces.
Is this possible with -filter_complex in a single ffmpeg command line?
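Not from the original post: a possible two-pass sketch for measuring the duplicate count, assuming the source file is named input.mp4 and runs at roughly 30 fps. ffmpeg filter expressions cannot feed a value measured at run time back into the same invocation, so the count has to be produced first and then substituted into a later trim/mpdecimate/concat command.

# 1) Re-encode the first second twice: once as-is, once with duplicates dropped.
ffmpeg -y -v error -t 1 -i input.mp4 -an head_orig.mp4
ffmpeg -y -v error -t 1 -i input.mp4 -vf mpdecimate -vsync vfr -an head_dedup.mp4

# 2) Count the frames in each file; the difference is the number of duplicated frames.
orig=$(ffprobe -v error -count_frames -select_streams v:0 \
  -show_entries stream=nb_read_frames -of default=noprint_wrappers=1:nokey=1 head_orig.mp4)
dedup=$(ffprobe -v error -count_frames -select_streams v:0 \
  -show_entries stream=nb_read_frames -of default=noprint_wrappers=1:nokey=1 head_dedup.mp4)
echo "$((orig - dedup)) duplicated frames in the first second"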
-
FFMPEG merging audio and video to get resulting video
26 September 2016, by King. I need to merge audio and video using ffmpeg so that the result is a video with the same duration as the audio.
I have tried two commands for this in my Linux terminal. Both work for some of the input videos, but for other input videos they produce output identical to the input video: the audio does not get merged.
The commands I have tried are:
ffmpeg -i wonders.mp4 -i Carefull.mp3 -c copy testvid.mp4
and
ffmpeg -i wonders.mp4 -i Carefull.mp3 -strict -2 testvid.mp4
and
ffmpeg -i video.mp4 -i audio.wav -c:v copy -c:a aac -strict experimental output.mp4
and these are my input videos:
- samplevid.mp4
duration - 28 seconds
size - 1.1 MB
status - working
And
- wonders.mp4
duration - 97 seconds
size - 96 MB
status - not working
I have observed that the larger input videos (more than 2 MB) are probably the issue, but I still want a fix.
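A plausible explanation, not stated in the original post: without explicit -map options, ffmpeg selects only one audio stream across all inputs (the one with the most channels), so when wonders.mp4 already carries its own audio track, that track can be chosen and Carefull.mp3 silently ignored. A sketch with explicit stream mapping, reusing the file names from the question; -shortest trims the output to the shorter of the two streams and can be dropped if the audio must define the duration:

ffmpeg -i wonders.mp4 -i Carefull.mp3 -map 0:v:0 -map 1:a:0 -c:v copy -c:a aac -shortest testvid.mp4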
-
FFMPEG C++ API audio/video sync, video is longer [closed]
11 May 2023, by Gábor Gombor. I create a video from frames in C++ at a given FPS and feed it to the FFmpeg API. I record an Unreal Engine viewport and pass the images to FFmpeg. Over the same interval I also have an audio track in FLAC that I want to keep in sync with the video. When the music ends, I close the video and merge the two, but the final video has sync problems: the video is slightly longer than the audio, so the delay keeps growing. For example, if I record for 0:55, the audio ends up the same length, but the video built from the frames is 0:56.


I think the following code is problematic:


bool MyVideoExporter::writeFrame(OutputStream* oStream, AVFrame* frame, int& framePTS)
{
    auto* packet = oStream->pkt->pkt;
    auto* codecContext = oStream->enc->codecContext;

    frame->pts = framePTS;
    frame->pkt_dts = frame->pts;

    auto ret = avcodec_send_frame(codecContext, frame);
    if (ret >= 0) {
        auto retVal = 0;
        while (retVal >= 0) {
            retVal = avcodec_receive_packet(codecContext, packet);
            if (retVal == AVERROR(EAGAIN) || retVal == AVERROR_EOF) break;
            else if (retVal < 0) {
                return false;
            }

            // rescale to audio, usually 1/44100
            av_packet_rescale_ts(packet, m_audiotimestamp, oStream->st->time_base);
            // rescale to FPS, usually 1/30 or 1/60
            av_packet_rescale_ts(packet, codecContext->time_base, oStream->st->time_base);

            packet->stream_index = oStream->st->index;

            retVal = av_interleaved_write_frame(m_avFormatContext.avFormatContext, packet);
            if (retVal < 0) {
                return false;
            }

            framePTS++;
        }

        return retVal == AVERROR_EOF;
    }

    return false;
}




Any idea what is wrong?


I tried changing the order of the av_packet_rescale_ts lines and moving the frame-increment code to other places, but got far worse results.
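For comparison, a minimal sketch of the usual PTS handling for a fixed-FPS video stream, not the poster's code and not a confirmed fix: the frame PTS advances by one tick of the encoder time base (1/fps) per source frame, and each packet is rescaled exactly once, from the codec time base to the stream time base, before it is written. The function name and parameters below are placeholders.

extern "C" {
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
}

static bool writeVideoFrame(AVFormatContext* fmtCtx, AVStream* st,
                            AVCodecContext* enc, AVPacket* pkt,
                            AVFrame* frame, int64_t& nextPts)
{
    if (frame)
        frame->pts = nextPts++;   // counted in enc->time_base units, i.e. 1/fps

    if (avcodec_send_frame(enc, frame) < 0)
        return false;

    for (;;) {
        int ret = avcodec_receive_packet(enc, pkt);
        if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
            return true;                      // need more input, or fully flushed
        if (ret < 0)
            return false;

        // Single rescale: encoder time base (1/fps) -> stream time base.
        av_packet_rescale_ts(pkt, enc->time_base, st->time_base);
        pkt->stream_index = st->index;

        if (av_interleaved_write_frame(fmtCtx, pkt) < 0)
            return false;
    }
}

The notable differences from the code quoted above are that the packet timestamps are rescaled only once, and that the PTS counter advances once per input frame rather than once per emitted packet.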