
Other articles (102)

  • User profiles

    12 April 2011, by

    Each user has a profile page on which they can edit their personal information. In the default top-of-page menu, a menu item is created automatically when MediaSPIP is initialised; it is visible only if the visitor is logged in to the site.
    The user can also reach the profile editor from their author page; a "Modifier votre profil" ("Edit your profile") link in the navigation is (...)

  • Encoding and processing into web-friendly formats

    13 April 2011, by

    MediaSPIP automatically converts uploaded files to internet-compatible formats.
    Video files are encoded to MP4, OGV and WebM (supported by HTML5) and to MP4 (supported by Flash).
    Audio files are encoded to MP3 and Ogg (supported by HTML5) and to MP3 (supported by Flash).
    Where possible, text is analyzed to extract the data needed for search-engine indexing, and is then exported as a series of image files.
    All uploaded files are stored online in their original format, so you can (...)

  • What exactly does this script do?

    18 January 2011, by

    This script is written in bash, so it can easily be used on any server.
    It is only compatible with a specific list of distributions (see "Liste des distributions compatibles").
    Installing MediaSPIP's dependencies
    Its main role is to install all of the software dependencies required on the server side, namely:
    the basic tools needed to install the remaining dependencies; the development tools: build-essential (via APT from the official repositories); (...)

On other sites (7205)

  • FFMPEG: Decode only portion of a frame

    17 April 2019, by SAMPro

    I am using the libav libraries to decode video and process the frames for some image processing. However, I have limited resources for decoding and no hardware acceleration at all, so decoding is the critical section of my project as far as speed is concerned.

    Because only part of each frame needs to be processed, I am wondering whether it is possible to decode only a region of interest (a rectangle) and leave the rest of the frame undecoded. Would that make decoding faster?

    Edit:
    I am decoding H.264.
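
    One way to keep the processing cost proportional to the rectangle, even if the whole frame still ends up being decoded, is to restrict the image processing itself to the region of interest. Below is a minimal C# sketch using FFmpeg.AutoGen (the binding used further down this page), assuming an 8-bit planar YUV frame; ProcessLine() is a hypothetical stand-in for the actual per-line processing:

    using FFmpeg.AutoGen;

    public static unsafe class RegionProcessor
    {
       // Touches only the requested rectangle of the luma plane of a decoded frame.
       // Assumes an 8-bit planar pixel format such as AV_PIX_FMT_YUV420P.
       public static void ProcessRegion(AVFrame* frame, int x, int y, int width, int height)
       {
           byte* luma = frame->data[0];      // plane 0 holds the Y (luma) samples
           int stride = frame->linesize[0];  // bytes per row; may be larger than frame->width

           for (int row = y; row < y + height; row++)
           {
               byte* line = luma + row * stride + x;
               ProcessLine(line, width);     // hypothetical per-line image processing
           }
       }

       private static void ProcessLine(byte* pixels, int count)
       {
           // image processing on 'count' pixels starting at 'pixels'
       }
    }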

  • FFMPEG conversion issue in PHP | 0KB conversion issue

    27 May 2012, by PHPLearner

    I am using the command below with exec(); it works on another server, but in my dev environment it only produces an empty FLV file.

    I have paid my host to reinstall FFmpeg, and they have confirmed it is working, but I am still not getting any output.

    They claim there is an issue with the syntax below. Does anyone have any thoughts on this?

    ffmpeg -i /home/hereitis/uploads/videos/MVI_0572.AVI -f flv -s 320x240 /home/hereitis/videos/flv/MVI_0572.flv

    Thank you so much for taking the time to read this.

  • Increase/Decrease audio volume using FFmpeg

    9 June 2016, by williamtroup

    I am currently using C# interop calls into the FFmpeg APIs to handle video and audio. I have the following code in place to extract the audio from a video and write it to a file.

    while (ffmpeg.av_read_frame(formatContext, &packet) >= 0)
    {
       if (packet.stream_index == streamIndex)
       {
           while (packet.size > 0)
           {
               int frameDecoded;
               // avcodec_decode_audio4 returns the number of bytes consumed from the packet.
               int frameDecodedResult = ffmpeg.avcodec_decode_audio4(codecContext, frame, &frameDecoded, &packet);

               if (frameDecoded > 0 && frameDecodedResult >= 0)
               {
                   //writeAudio.WriteFrame(frame);

                   // Advance past the consumed bytes; one packet may contain several audio frames.
                   packet.data += frameDecodedResult;
                   packet.size -= frameDecodedResult;
               }
           }

           frameIndex++;
       }

       ffmpeg.av_free_packet(&packet);
    }

    This is all working correctly. I’m currently using the FFmpeg.AutoGen project for the API access.

    I want to be able to increase/decrease the volume of the audio before it's written to the file, but I cannot seem to find a command or any help with this. Does it have to be done manually?

    Update 1:

    After receiving some help, this is the class layout I have:

    public unsafe class FilterVolume
    {
       #region Private Member Variables

       private AVFilterGraph* m_filterGraph = null;
       private AVFilterContext* m_aBufferSourceFilterContext = null;
       private AVFilterContext* m_aBufferSinkFilterContext = null;

       #endregion

       #region Private Constant Member Variables

       private const int EAGAIN = 11;

       #endregion

       public FilterVolume(AVCodecContext* codecContext, AVStream* stream, float volume)
       {
           CodecContext = codecContext;
           Stream = stream;
           Volume = volume;

           Initialise();
       }

       public AVFrame* Adjust(AVFrame* frame)
       {
           AVFrame* returnFilteredFrame = ffmpeg.av_frame_alloc();

           if (m_aBufferSourceFilterContext != null && m_aBufferSinkFilterContext != null)
           {
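               // Push the decoded frame into the source filter, then pull the filtered frame back out of the sink.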
               int bufferSourceAddFrameResult = ffmpeg.av_buffersrc_add_frame(m_aBufferSourceFilterContext, frame);
               if (bufferSourceAddFrameResult < 0)
               {
               }

               int bufferSinkGetFrameResult = ffmpeg.av_buffersink_get_frame(m_aBufferSinkFilterContext, returnFilteredFrame);
               if (bufferSinkGetFrameResult < 0 && bufferSinkGetFrameResult != -EAGAIN)
               {
               }
           }

           return returnFilteredFrame;
       }

       public void Dispose()
       {
           Cleanup(m_filterGraph);
       }

       #region Private Properties

       private AVCodecContext* CodecContext { get; set; }
       private AVStream* Stream { get; set; }
       private float Volume { get; set; }

       #endregion

       #region Private Setup Helper Functions

       private void Initialise()
       {
           m_filterGraph = GetAllocatedFilterGraph();

           string aBufferFilterArguments = string.Format("sample_fmt={0}:channel_layout={1}:sample_rate={2}:time_base={3}/{4}",
               (int)CodecContext->sample_fmt,
               CodecContext->channel_layout,
               CodecContext->sample_rate,
               Stream->time_base.num,
               Stream->time_base.den);

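           // Build the filter chain: abuffer (source) -> volume -> abuffersink (sink).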
           AVFilterContext* aBufferSourceFilterContext = CreateFilter("abuffer", m_filterGraph, aBufferFilterArguments);
           AVFilterContext* volumeFilterContext = CreateFilter("volume", m_filterGraph, string.Format("volume={0}", Volume));
           AVFilterContext* aBufferSinkFilterContext = CreateFilter("abuffersink", m_filterGraph);

           LinkFilter(aBufferSourceFilterContext, volumeFilterContext);
           LinkFilter(volumeFilterContext, aBufferSinkFilterContext);

           SetFilterGraphConfiguration(m_filterGraph, null);

           m_aBufferSourceFilterContext = aBufferSourceFilterContext;
           m_aBufferSinkFilterContext = aBufferSinkFilterContext;
       }

       #endregion

       #region Private Cleanup Helper Functions

       private static void Cleanup(AVFilterGraph* filterGraph)
       {
           if (filterGraph != null)
           {
               ffmpeg.avfilter_graph_free(&filterGraph);
           }
       }

       #endregion

       #region Private Helpers

       private AVFilterGraph* GetAllocatedFilterGraph()
       {
           AVFilterGraph* filterGraph = ffmpeg.avfilter_graph_alloc();
           if (filterGraph == null)
           {
           }

           return filterGraph;
       }

       private AVFilter* GetFilterByName(string name)
       {
           AVFilter* filter = ffmpeg.avfilter_get_by_name(name);
           if (filter == null)
           {
           }

           return filter;
       }

       private void SetFilterGraphConfiguration(AVFilterGraph* filterGraph, void* logContext)
       {
           int filterGraphConfigResult = ffmpeg.avfilter_graph_config(filterGraph, logContext);
           if (filterGraphConfigResult < 0)
           {
           }
       }

       private AVFilterContext* CreateFilter(string filterName, AVFilterGraph* filterGraph, string filterArguments = null)
       {
           AVFilter* filter = GetFilterByName(filterName);
           AVFilterContext* filterContext;

           int aBufferFilterCreateResult = ffmpeg.avfilter_graph_create_filter(&filterContext, filter, filterName, filterArguments, null, filterGraph);
           if (aBufferFilterCreateResult < 0)
           {
           }

           return filterContext;
       }

       private void LinkFilter(AVFilterContext* source, AVFilterContext* destination)
       {
           int filterLinkResult = ffmpeg.avfilter_link(source, 0, destination, 0);
           if (filterLinkResult < 0)
           {
           }
       }

       #endregion
    }

    The Adjust() function is called after a frame is decoded. I'm currently getting a -22 (AVERROR(EINVAL)) error when av_buffersrc_add_frame() is called. This indicates that a parameter is invalid, but after debugging, I cannot see anything that would be causing this.

    This is how the code is called:

    while (ffmpeg.av_read_frame(formatContext, &packet) >= 0)
    {
       if (packet.stream_index == streamIndex)
       {
           while (packet.size > 0)
           {
               int frameDecoded;
               int frameDecodedResult = ffmpeg.avcodec_decode_audio4(codecContext, frame, &frameDecoded, &packet);

               if (frameDecoded > 0 && frameDecodedResult >= 0)
               {
                   // Run the decoded frame through the volume filter graph before writing it.
                   AVFrame* filteredFrame = m_filterVolume.Adjust(frame);

                   //writeAudio.WriteFrame(filteredFrame);

                   packet.data += frameDecodedResult;
                   packet.size -= frameDecodedResult;
               }
           }

           frameIndex++;
       }

       ffmpeg.av_free_packet(&packet);
    }

    Update 2:

    Cracked it. The "channel_layout" option in the filter argument string is supposed to be hexadecimal. This is what the string formatting should look like:

    string aBufferFilterArguments = string.Format("sample_fmt={0}:channel_layout=0x{1:X}:sample_rate={2}:time_base={3}/{4}",
       (int)CodecContext->sample_fmt,
       CodecContext->channel_layout,
       CodecContext->sample_rate,
       Stream->time_base.num,
       Stream->time_base.den);
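
    For a stereo, 44.1 kHz, planar-float stream (sample_fmt 8 = AV_SAMPLE_FMT_FLTP, channel_layout 0x3 = AV_CH_LAYOUT_STEREO), and assuming the stream time base is 1/44100, this produces an argument string along the lines of:

        sample_fmt=8:channel_layout=0x3:sample_rate=44100:time_base=1/44100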