
Media (91)

Other articles (94)

  • Encoding and processing into web-friendly formats

13 April 2011, by

    MediaSPIP automatically converts uploaded files to internet-compatible formats.
    Video files are encoded to OGV and WebM (supported by HTML5) and to MP4 (supported by Flash).
    Audio files are encoded to Ogg (supported by HTML5) and to MP3 (supported by Flash).
    Where possible, text documents are analyzed to extract the data needed for search-engine indexing, and are then exported as a series of image files.
    All uploaded files are stored online in their original format, so you can (...)

  • User profiles

    12 April 2011, by

    Each user has a profile page where they can edit their personal information. In the default top-of-page menu, a menu item is created automatically when MediaSPIP is initialized; it is visible only when the visitor is logged in to the site.
    Users can also edit their profile from their author page: a "Modifier votre profil" (edit your profile) link in the navigation is (...)

  • Configuring language support

    15 November 2010, by

    Accessing the configuration and adding supported languages
    To enable support for new languages, go to the "Administrer" (administer) section of the site.
    From there, the navigation menu gives access to a "Gestion des langues" (language management) section where support for new languages can be enabled.
    Each newly added language can still be disabled as long as no object has been created in that language; in that case it is greyed out in the configuration and (...)

On other sites (12476)

  • FFmpeg: How to account for the changing bounding box size due to rotation

    29 January 2020, by Ronak Makwana

    I have multiple input GIF and image files to overlay on a video. The following is the command I tried, but it did not produce the expected result.

    -y -i Gromoverlayvideo.mp4 -ignore_loop 0 -i chtOekuyI69C4enhdN.gif -ignore_loop 0 -i ZahTVRkpwweJPf9EMO.gif -filter_complex [0:v]scale=iw:ih[outv0];[1:0]scale=411.3303:228.64946,rotate=41.0*PI/180:c=none:ow=rotw(41.0*PI/180):oh=roth(41.0*PI/180)[outv1];[2:0]scale=336.3402:185.56363,rotate=-32.0*PI/180:c=none:ow=rotw(-32.0*PI/180):oh=roth(-32.0*PI/180)[outv2];[outv0][outv1]overlay=9:329:shortest=1[outo0];[outo0][outv2]overlay=255:478:shortest=1 -r 25 -preset superfast 1580281804661.mp4

    This is what I want:

    (screenshot of the desired result omitted)

    This is the result I get:

    (screenshot of the actual result omitted)

    Please help me calculate the x and y overlay points from the input width, height and rotation angle. (One way to compute them is sketched below.)
    Thanks.
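
    A sketch of the geometry involved (not taken from the original post; the helper name rotatedPlacement is illustrative): rotw(a) and roth(a) give the width and height of the axis-aligned bounding box of the rotated image, so the frame handed to overlay is larger than the scaled overlay, and the x/y offsets have to be pulled back by half of that growth if the rotated overlay is meant to stay centred where the unrotated one would have sat.

    // Sketch: compute the rotated bounding box and the adjusted overlay offsets.
    // Assumes the intent is to keep the overlay's centre at the same point as the
    // unrotated placement; adjust if a different anchor is wanted.
    #include <cmath>
    #include <cstdio>

    struct Placement { double x, y, w, h; };   // top-left corner and bounding-box size

    // w, h : scaled (unrotated) overlay size
    // x, y : intended top-left position of the unrotated overlay on the base video
    // deg  : rotation angle in degrees (same sign as passed to the rotate filter)
    static Placement rotatedPlacement(double w, double h, double x, double y, double deg)
    {
        const double kPi = 3.14159265358979323846;
        const double a  = deg * kPi / 180.0;
        // Same formulas as ffmpeg's rotw()/roth(): bounding box of the rotated image.
        const double bw = std::fabs(w * std::cos(a)) + std::fabs(h * std::sin(a));
        const double bh = std::fabs(w * std::sin(a)) + std::fabs(h * std::cos(a));
        // Shift the top-left corner so the centre stays at (x + w/2, y + h/2).
        return { x - (bw - w) / 2.0, y - (bh - h) / 2.0, bw, bh };
    }

    int main()
    {
        // First overlay from the command above: 411.33x228.65, rotated by 41 degrees,
        // originally intended to sit at (9, 329).
        const Placement p = rotatedPlacement(411.3303, 228.64946, 9, 329, 41.0);
        std::printf("overlay=%ld:%ld (bounding box %.0fx%.0f)\n",
                    std::lround(p.x), std::lround(p.y), p.w, p.h);
        return 0;
    }

    The same function gives the adjusted offsets for the second overlay (336.34x185.56, rotated by -32 degrees, placed at 255:478).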

  • FFMPEG "volumedetect" filter in C++

    20 March 2020, by Hrethric

    I am hoping to use FFmpeg to calculate the volume of my audio files, but find the documentation a bit lacking on this particular issue. Basically I'm trying to do something similar to the following command:

    ffmpeg -i "myfile.mp3" -filter:a volumedetect -f null /dev/null

    But in C++. Creating the filter and passing my audio frames to it seems relatively straightforward, along the lines of:

    avfilter_register_all();
    _volumeFilter = avfilter_get_by_name("volumedetect");

    _filterGraph = avfilter_graph_alloc();

    _volumeFilterCtx = avfilter_graph_alloc_filter(_filterGraph, _volumeFilter, "volumedetect");

    The above code succeeds in initializing the filter. Then, when I read a frame, I basically do:

    if (frameFinished)
    {
       /* push the audio data from decoded frame into the filtergraph */
       if (av_buffersrc_add_frame_flags(_volumeFilterCtx, _audioFrame, 0) < 0)
       {
           av_log(NULL, AV_LOG_ERROR, "Error while feeding the audio filtergraph\n");
           break;
       }
       /* pull filtered audio from the filtergraph */
       while (1)
       {
           ret = av_buffersink_get_frame(_volumeFilterCtx, _filteredFrame);
           if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
                   break;
       }
    }

    This is where I'm not sure what to do. Regular audio filters modify the underlying data, so you can just use the resulting frame. With this one, though, I need to pull data out of the filter to tell me what the volume is. How do I do this? Or am I going about this the wrong way? (One possible approach is sketched below.)

    Again I simply can't find any documentation on this, so thanks in advance for the help!
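
    One detail worth noting, and a possible way around it (a sketch, not the poster's code; it assumes frames are fed through an abuffer source context, called _bufferSrcCtx here purely for illustration, since av_buffersrc_add_frame_flags() expects a buffer-source filter context rather than the volumedetect context itself): volumedetect does not modify or annotate the frames it passes through. It accumulates statistics and prints mean_volume / max_volume via av_log() only when the filter is uninitialized, i.e. when the graph is freed. Installing a log callback is one way to read those numbers programmatically.

    // Sketch only: capture volumedetect's report, which is written to the log when
    // the filter graph is torn down rather than returned in the output frames.
    extern "C" {
    #include <libavutil/log.h>
    }
    #include <cstdarg>
    #include <cstdio>
    #include <cstring>
    #include <string>

    static std::string g_volumeReport;   // illustrative global, not part of any FFmpeg API

    static void captureLog(void *avcl, int level, const char *fmt, va_list args)
    {
        char line[1024];
        va_list copy;
        va_copy(copy, args);                                // vsnprintf consumes its va_list
        vsnprintf(line, sizeof(line), fmt, copy);
        va_end(copy);
        if (std::strstr(line, "mean_volume") || std::strstr(line, "max_volume"))
            g_volumeReport += line;                         // keep only the volume lines
        av_log_default_callback(avcl, level, fmt, args);    // still log normally
    }

    // Usage sketch, after every decoded frame has been pushed into the graph:
    //   av_log_set_callback(captureLog);
    //   av_buffersrc_add_frame_flags(_bufferSrcCtx, nullptr, 0);  // NULL frame = EOF, flushes the filter
    //   ...drain av_buffersink_get_frame() until it returns AVERROR_EOF...
    //   avfilter_graph_free(&_filterGraph);   // volumedetect prints its report here
    //   std::printf("%s", g_volumeReport.c_str());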

  • decoding an input video using FFmpeg command

    2 February 2023, by david

    How can I decode an input video using an FFmpeg command? My input videos are in TS format.

    I am not very familiar with FFmpeg, and the decoding command I found decodes the input video to a YUV output that cannot be displayed.

    Question: which FFmpeg command can I use to decode the input video to a viewable file?

    The command below decodes the input video to YUV, but I need a command that decodes the video to the same format as the input, so that I can calculate the PSNR between them. (A sketch of the PSNR calculation follows below.)

    ffmpeg -i input.ts -f rawvideo -pix_fmt yuv420p output.yuv

    • If I choose ts instead of YUV, it produces a video file that is invalid for FFmpeg. What is the problem?

    • How can I decode an input video to a playable format?

    Is the command below wrong for decoding?

    ffmpeg -i input.ts -f rawvideo -pix_fmt yuv420p output.ts
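
    On the PSNR point, a sketch of the underlying calculation (the file names reference.yuv / distorted.yuv and the 1920x1080 frame size are assumptions, not taken from the post): once both the original and the processed video have been dumped to raw yuv420p with a command like the one above, the per-frame PSNR of the luma plane is 10*log10(255^2 / MSE) over the pixel values.

    // Sketch: PSNR of the Y plane between two raw yuv420p dumps of identical size.
    #include <cmath>
    #include <cstdio>
    #include <vector>

    int main()
    {
        const int W = 1920, H = 1080;                        // assumed frame size
        const size_t frameSize = (size_t)W * H * 3 / 2;      // yuv420p: Y + U/4 + V/4
        std::FILE *ref = std::fopen("reference.yuv", "rb");  // assumed file names
        std::FILE *dis = std::fopen("distorted.yuv", "rb");
        if (!ref || !dis) { std::fprintf(stderr, "cannot open input files\n"); return 1; }

        std::vector<unsigned char> a(frameSize), b(frameSize);
        int frame = 0;
        while (std::fread(a.data(), 1, frameSize, ref) == frameSize &&
               std::fread(b.data(), 1, frameSize, dis) == frameSize)
        {
            double mse = 0.0;
            for (int i = 0; i < W * H; ++i) {                // Y plane only
                const double d = (double)a[i] - (double)b[i];
                mse += d * d;
            }
            mse /= (double)(W * H);
            const double psnr = (mse == 0.0) ? INFINITY
                                             : 10.0 * std::log10(255.0 * 255.0 / mse);
            std::printf("frame %d: PSNR(Y) = %.2f dB\n", frame++, psnr);
        }
        std::fclose(ref);
        std::fclose(dis);
        return 0;
    }

    ffmpeg itself can also report PSNR directly through its psnr filter when given the reference and the processed video as two inputs, which avoids writing the intermediate raw dumps.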