
Other articles (57)

  • Use, discuss, criticize

13 April 2011

    Talk to people directly involved in MediaSPIP’s development, or to people around you who could use MediaSPIP to share, enhance or develop their creative projects.
    The bigger the community, the more MediaSPIP’s potential will be explored and the faster the software will evolve.
    A discussion list is available for all exchanges between users.

  • Support for all media types

10 April 2011

    Unlike many modern document-sharing programs and platforms, MediaSPIP aims to handle as many different document formats as possible, whether they are: images (png, gif, jpg, bmp and others...); audio (MP3, Ogg, Wav and others...); video (Avi, MP4, Ogv, mpg, mov, wmv and others...); textual content, code or other (open office, microsoft office (spreadsheet, presentation), web (html, css), LaTeX, Google Earth) (...)

  • Writing a news item

21 June 2013

    Present changes in your MediaSPIP, or news about your projects, on your MediaSPIP via the news section.
    In MediaSPIP's default theme, spipeo, news items are displayed at the bottom of the main page, below the editorials.
    You can customize the news-creation form.
    News-creation form: for a document of type news, the default fields are: Publication date (customize the publication date) (...)

On other sites (8652)

  • FFMPEG: merge images and audio to create a video [duplicate]

3 July 2018, by Sahil Kapoor

    I am new to ffmpeg. I have a requirement where I want to merge 500 images with one audio track and generate an output video. Each image should be displayed for 5 seconds, then the next image for 5 seconds, and so on, while the audio plays continuously in the background. For this I am using the following ffmpeg command:

    ffmpeg -f image2 -framerate 1/5 -pattern_type glob -i '*.jpg' -i audio.mp3 -c:v libx264 -vf scale=480:360 -pix_fmt yuv420p out.mp4

    The above command generates the correct video, but that video does not play in VLC. Also, if I play it at 2x or 3x speed (we have to give the user this option), it hangs on a single frame.

    However, if I modify the command and add the "-r 25" option, the video plays in VLC and also works correctly at 2x and 3x speed.

    ffmpeg -f image2 -framerate 1/5 -pattern_type glob -i '*.jpg' -i audio.mp3 -c:v libx264 -vf scale=480:360 -pix_fmt yuv420p -r 25 out_25f.mp4

    Using the above command is not desirable, as it generates a much larger video (4x bigger) and takes 5x longer to encode.

    Please help me optimize the command so it generates the desired result.
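
    One likely direction (a sketch, not a verified fix; the fps, preset and CRF values below are assumptions to tune): players cope badly with a 1/5 fps stream, so re-timing the output to a normal rate is hard to avoid, but the extra encoding cost of the duplicated frames can be cut by telling x264 the content is static and trading size against quality with CRF:

    ffmpeg -f image2 -framerate 1/5 -pattern_type glob -i '*.jpg' -i audio.mp3 -c:v libx264 -vf "scale=480:360,fps=25" -pix_fmt yuv420p -preset veryfast -tune stillimage -crf 28 out.mp4

    The runs of identical frames produced by fps=25 encode as cheap skip frames, so the size penalty over the 1/5 fps file may be much smaller than the 4x observed with a plain "-r 25" at default quality.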

  • Overlay with fade in-out not showing

3 May 2018, by Pier Giorgio Misley

    I can currently add an overlay image at the center of a video, from a start time A to an end time B.

    Reading here and there, I tried to add a fade-in/fade-out effect to my overlaid image, but the result is that the image does not show up at all in my final video.

    This is my "experiment":

    -i output.mp4 -i 1.png -filter_complex "[1:v]format=rgba,scale=300:300,fade=in:st=0:d=1:alpha=1,fade=out:st=5:d=1[im];[0][im]overlay=(main_w-overlay_w)/2:(main_h - overlay_h) / 2:enable='between(t,0,5)'" -pix_fmt yuv420p -c:a copy output_0.mp4

    If I understood correctly:

    fade=in:st=0:d=1

    means that the image should appear at 0 s with a 1-second fade-in, and

    fade=out:st=5:d=1

    means that the image should disappear at 5 s with a 1-second fade-out.

    Is that right?
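
    That reading of the fade options is correct. A likely reason the image never shows (an observation, not a tested fix): without -loop 1, the PNG input yields a single frame at t=0; fade=in at st=0 makes that one frame fully transparent, and overlay then repeats that transparent frame for the whole video. Note also that the fade-out lacks alpha=1, so it would fade the image to black rather than to transparent. A hedged correction, with the enable window replaced by a 6-second image branch:

    ffmpeg -i output.mp4 -loop 1 -t 6 -i 1.png -filter_complex "[1:v]format=rgba,scale=300:300,fade=in:st=0:d=1:alpha=1,fade=out:st=5:d=1:alpha=1[im];[0][im]overlay=(main_w-overlay_w)/2:(main_h-overlay_h)/2:eof_action=pass" -pix_fmt yuv420p -c:a copy output_0.mp4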

    The second part would be to add a zoom effect to the image while it is fading in; can I combine the fade-in and the zoom effect together?

    I think that something like this should zoom my image overlay for a duration of 3 seconds; can I add it with another "," separating it from the other filters applied to the overlaid image?

    zoompan=z='if(lte(zoom,1.0),1.5,max(1.001,zoom-0.0015))':d=3

    Thanks
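
    On the zoom question: yes, zoompan chains into the same filter branch with another ",". Two caveats worth flagging (both from zoompan's documented defaults): d counts output frames rather than seconds, so a 3-second zoom at the default 25 fps needs d=75, and the output size defaults to hd720 (1280x720) unless s is set. A hedged rewrite of the snippet above for a 300x300 overlay:

    zoompan=z='min(zoom+0.0067,1.5)':d=75:s=300x300:fps=25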

    Edit:

    The video is 10 seconds long. I want one image to be shown with fade in/out and zoom-in from 0 to 5 seconds, and another from 5 to 10 seconds with the same effect.

    Step 1: adding fade in/out

    -i 0_vid.mp4 -loop 1 -t 1 -i 1.png -filter_complex "[1:v]format=rgba,fade=in:st=0:d=1:alpha=1,fade=out:st=4:d=1:alpha=1[im];[0][im]overlay=(main_w-overlay_w)/2:(main_h - overlay_h)/2:shortest=1" -pix_fmt yuv420p -c:a copy output_0.mp4

    With this solution the fade-in works, but the fade-out is not taken into consideration at all.
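
    A plausible cause (inferred from the options above, not a tested fix): -t 1 ends the looped image stream after 1 second, so a fade-out starting at st=4 never happens, and shortest=1 then stops the output as soon as that branch runs dry. Extending the image branch to cover the fade-out and letting the main video run to its full length:

    ffmpeg -i 0_vid.mp4 -loop 1 -t 5 -i 1.png -filter_complex "[1:v]format=rgba,fade=in:st=0:d=1:alpha=1,fade=out:st=4:d=1:alpha=1[im];[0][im]overlay=(main_w-overlay_w)/2:(main_h-overlay_h)/2:eof_action=pass" -pix_fmt yuv420p -c:a copy output_0.mp4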

    If I add the zoom:

    -i 0_vid.mp4 -loop 1 -t 1 -i 1.png -filter_complex "[1:v]format=rgba,zoompan=z='if(lte(zoom,1.0),1.5,max(1.3875,zoom-0.0015))':d=625,fade=in:st=0:d=1:alpha=1,fade=out:st=4:d=1:alpha=1[im];[0][im]overlay=(main_w-overlay_w)/2:(main_h - overlay_h)/2:shortest=1" -pix_fmt yuv420p -c:a copy output_0.mp4

    This way the image is stretched to the full screen width and its height is cut. And during the 5-second animation the image is not resized as expected.

    What I'm aiming for is the image being zoomed in the video without being cut or stretched: I would like the image to look smaller at the start and bigger at the end. Is that possible?

    And also, what am I doing wrong with the fade-out animation?
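
    Putting the pieces together (a sketch; the zoom increment, sizes and timings are assumptions to adjust): the stretching comes from zoompan's default 1280x720 output size, so pin s to the intended overlay size; put format=rgba after zoompan so the fades still find an alpha channel; and since zoompan builds its own 125-frame (5 s at 25 fps) timeline from a single input image, -loop and -t can be dropped:

    ffmpeg -i 0_vid.mp4 -i 1.png -filter_complex "[1:v]zoompan=z='min(zoom+0.004,1.5)':d=125:s=300x300:fps=25,format=rgba,fade=in:st=0:d=1:alpha=1,fade=out:st=4:d=1:alpha=1[im];[0][im]overlay=(main_w-overlay_w)/2:(main_h-overlay_h)/2:eof_action=pass" -pix_fmt yuv420p -c:a copy output_0.mp4

    Note that zoompan zooms by cropping into the image, so the content grows inside a fixed 300x300 box rather than the box itself growing. The second image, shown from 5 to 10 s, could be a third input run through the same chain, delayed with setpts=PTS+5/TB and composited by a second overlay filter.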

  • Stack AVFrame side by side (libav/ffmpeg)

22 February 2018, by dronemastersaga

    So I am trying to combine two 1920x1080 H264 livestreams side by side into a single 3840x1080 livestream.

    For this, I decode the streams to AVFrames in libav/FFmpeg, which I would like to combine into a bigger frame. The input AVFrames: two 1920x1080 frames in NV12 format (description: planar YUV 4:2:0, 12bpp, 1 plane for Y and 1 plane for the interleaved UV components (first byte U, following byte V)).

    The way I have figured out is: colorspace conversion (YUV to BGR) with sws_scale, wrapping the results in OpenCV Mats, hconcat in OpenCV to stack them side by side, then colorspace conversion (BGR to YUV) with sws_scale again.

    Below is the method currently being used:

    // Prior code is too long: basically it decodes the 2 streams to AVFrames frame1 and frame2 in a loop.
    // Convert both NV12 frames to BGR so OpenCV can operate on them.
    sws_scale(swsContext, (const uint8_t *const *) frame1->data, frame1->linesize, 0, 1080, (uint8_t *const *) frameBGR1->data, frameBGR1->linesize);
    sws_scale(swsContext, (const uint8_t *const *) frame2->data, frame2->linesize, 0, 1080, (uint8_t *const *) frameBGR2->data, frameBGR2->linesize);
    // Wrap the BGR buffers in Mats (no copy) and stack them horizontally.
    Mat matFrame1(1080, 1920, CV_8UC3, frameBGR1->data[0], (size_t) frameBGR1->linesize[0]);
    Mat matFrame2(1080, 1920, CV_8UC3, frameBGR2->data[0], (size_t) frameBGR2->linesize[0]);
    Mat fullFrame;
    hconcat(matFrame1, matFrame2, fullFrame);
    // Convert the combined BGR image back to YUV for encoding.
    const int stride[] = { static_cast<int>(fullFrame.step[0]) };
    sws_scale(modifyContext, (const uint8_t *const *) &fullFrame.data, stride, 0, fullFrame.rows, newFrame->data, newFrame->linesize);
    // From here, newFrame is sent to the encoder.

    The resulting image is satisfactory, but it does lose quality in the colorspace conversions. More importantly, this method is too slow to use (I'm at 15 fps and I need 30). Is there a way to stack AVFrames directly without colorspace conversion? Or is there a better way to do this? I have searched a lot and couldn't find any solution. Please advise.
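
    Since both inputs are already NV12 and the target layout is a plain side-by-side, the stacking itself can be done with row-wise memcpy, with no OpenCV and no BGR round trip. A minimal sketch, assuming both frames share the same size and format and that the encoder accepts NV12 (libx264 does; otherwise a single NV12-to-YUV420P sws_scale of the combined frame is still far cheaper than two YUV/BGR conversions). hstack_nv12 is a hypothetical helper name:

    #include <string.h>
    #include <libavutil/frame.h>
    #include <libavutil/pixfmt.h>

    /* Hypothetical helper: place two NV12 frames of identical size side by
     * side in a newly allocated NV12 frame of twice the width. */
    static AVFrame *hstack_nv12(const AVFrame *f1, const AVFrame *f2)
    {
        const int w = f1->width, h = f1->height;
        AVFrame *dst = av_frame_alloc();
        if (!dst)
            return NULL;
        dst->format = AV_PIX_FMT_NV12;
        dst->width  = 2 * w;
        dst->height = h;
        if (av_frame_get_buffer(dst, 32) < 0) {
            av_frame_free(&dst);
            return NULL;
        }
        av_frame_copy_props(dst, f1); /* carry over pts etc. from the left frame */

        /* Y plane: one byte per pixel, h rows. */
        for (int y = 0; y < h; y++) {
            uint8_t *out = dst->data[0] + y * dst->linesize[0];
            memcpy(out,     f1->data[0] + y * f1->linesize[0], w);
            memcpy(out + w, f2->data[0] + y * f2->linesize[0], w);
        }
        /* UV plane: interleaved U/V pairs, w bytes per row, h/2 rows. */
        for (int y = 0; y < h / 2; y++) {
            uint8_t *out = dst->data[1] + y * dst->linesize[1];
            memcpy(out,     f1->data[1] + y * f1->linesize[1], w);
            memcpy(out + w, f2->data[1] + y * f2->linesize[1], w);
        }
        return dst;
    }

    libavfilter also ships a native hstack filter that does this on YUV frames directly, which may be the better route if wiring up a filter graph is acceptable.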