
Other articles (34)

  • Authorizations overridden by plugins

    27 April 2010, by

    Mediaspip core
    autoriser_auteur_modifier() so that visitors are able to modify their information on the authors page

  • Encoding and processing into web-friendly formats

    13 April 2011, by

    MediaSPIP automatically converts uploaded files to internet-compatible formats.
    Video files are encoded in Ogv and WebM (supported by HTML5) and MP4 (supported by Flash).
    Audio files are encoded in MP3 and Ogg (supported by HTML5) and MP3 (supported by Flash).
    Where possible, text is analyzed in order to retrieve the data needed for search engine detection, and then exported as a series of image files.
    All uploaded files are stored online in their original format, so you can (...)

  • From upload to the final video [standalone version]

    31 January 2010, by

    The path an audio or video document takes through SPIPMotion is divided into three distinct steps.
    Upload and retrieval of information about the source video
    First, you need to create a SPIP article and attach the "source" video document to it.
    When this document is attached to the article, two actions are executed in addition to the normal behaviour: retrieval of the technical information about the file's audio and video streams; generation of a thumbnail: extraction of a (...)

On other sites (4622)

  • Decode audio using libavcodec and play using libAO?

    26 April 2018, by Ashika Umanga Umagiliya

    I use the following code snippet to decode audio files (tested with MP3, WAV, WMV).

    But when it plays the audio, it just produces static and crashes from time to time.
    Any hints on what I am doing wrong here?

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <stdint.h>


    extern "C" {
    #include "libavutil/mathematics.h"
    #include "libavformat/avformat.h"
    #include "libswscale/swscale.h"
    #include <ao/ao.h>

    }

    void die(const char *msg)
    {
       fprintf(stderr,"%s\n",msg);
       exit(1);
    }

    int main(int argc, char **argv)
    {

       const char* input_filename=argv[1];

       //avcodec_register_all();
       av_register_all();
       //av_ini

       AVFormatContext* container=avformat_alloc_context();
       if(avformat_open_input(&container,input_filename,NULL,NULL)<0){
           die("Could not open file");
       }

       if(av_find_stream_info(container)<0){
           die("Could not find file info");
       }
       av_dump_format(container,0,input_filename,false);

       int stream_id=-1;
       int i;
       for(i=0;i<container->nb_streams;i++){
           if(container->streams[i]->codec->codec_type==AVMEDIA_TYPE_AUDIO){
               stream_id=i;
               break;
           }
       }
       if(stream_id==-1){
           die("Could not find Audio Stream");
       }

       AVDictionary *metadata=container->metadata;

       AVCodecContext *ctx=container->streams[stream_id]->codec;
       AVCodec *codec=avcodec_find_decoder(ctx->codec_id);

       if(codec==NULL){
           die("cannot find codec!");
       }

       if(avcodec_open(ctx,codec)<0){
           die("Codec cannot be found");
       }

       //ctx=avcodec_alloc_context3(codec);

       //initialize AO lib
       ao_initialize();

       int driver=ao_default_driver_id();

       ao_sample_format sformat;
       sformat.bits=16;
       sformat.channels=2;
       sformat.rate=44100;
       sformat.byte_format=AO_FMT_NATIVE;
       sformat.matrix=0;

       ao_device *adevice=ao_open_live(driver,&sformat,NULL);
       //end of init AO LIB

       AVPacket packet;
       av_init_packet(&packet);

       AVFrame *frame=avcodec_alloc_frame();

       int buffer_size=AVCODEC_MAX_AUDIO_FRAME_SIZE;
       uint8_t buffer[buffer_size];
       packet.data=buffer;
       packet.size =buffer_size;



       int len;
       int frameFinished=0;
       while(av_read_frame(container,&packet)>=0)
       {

           if(packet.stream_index==stream_id){
               //printf("Audio Frame read  \n");
               int len=avcodec_decode_audio4(ctx,frame,&frameFinished,&packet);
               //frame->
               if(frameFinished){
                   //printf("Finished reading Frame %d %d\n",packet.size,len);
                   ao_play(adevice, (char*)frame->data, len);
               }

           }


       }

       av_close_input_file(container);
       ao_shutdown();
       return 0;
    }
  • FFMPEG with hardware codec support

    28 January 2012, by mctma

    I have built a simple media player using ffmpeg on Android 2.2. Hardware is an arm cortex-a8 based 1GHz processor, 512 MB RAM. I am getting low performance, around 15 FPS for 800x600 mp4 video. I have a couple of questions on how I can improve the performance

    • How can I use the hardware codecs available on my target device? How can I configure ffmpeg to use the available hardware decoders? Does the GPU or graphics driver have to expose a standard API like OpenMAX IL in order to do this?

    • What options should be enabled when building ffmpeg so that it is optimized for my target hardware? Something like:
      --cpu=cortex-a8 --extra-cflags="-mfpu=neon" ...
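
    For a Cortex-A8 target, a configure invocation along these lines is commonly suggested; this is only a sketch, the flag names should be checked against `./configure --help` for the ffmpeg version in use, and the cross-compile prefix below is a placeholder for whatever the Android NDK toolchain provides:

```shell
./configure \
  --arch=arm \
  --cpu=cortex-a8 \
  --enable-neon \
  --extra-cflags="-mfpu=neon -mfloat-abi=softfp" \
  --enable-cross-compile \
  --target-os=linux \
  --cross-prefix=arm-linux-androideabi-
```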

    I have already looked around the net, but I couldn't find the answers I am looking for. I hope someone can advise me on this.

    Thanks in advance !

  • How to get frames from HDR video in scRGB color space ?

    5 March 2018, by Виталий Синявский

    I want to create a simple video player that will show HDR video on an HDR TV. For example, this "LG Chess HDR" video. It is encoded with HEVC, its bit depth is 10 bits, its pixel format is YUV420P10LE, and it carries metadata about the BT.2020 color space and the PQ transfer function.

    In this NVIDIA article I found the following:

    The display driver takes the scRGB back buffer, and converts it to the
    standard expected by the display presently connected. In general, this
    means converting the color space from sRGB primaries to BT. 2020
    primaries, scaling to an appropriate level, and encoding with a
    mechanism like PQ. Also, possibly performing conversions like RGB to
    YCC if that display connection requires it.

    It means that my player should render pixels in the scRGB color space (linear encoding, sRGB primaries, full range from -0.5 to just under +7.5). So I need to get frames from the source video in this color space somehow, preferably in an FP16 pixel format (half float, 16 bits per color channel). I came up with the following simple pipeline for rendering video in HDR:

    source HDR video in BT2020 color space with applied PQ -> [some video library] ->
    -> video frames with colors in scRGB color space -> [my program] ->
    -> rendered video on HDR TV with applied conversions by display driver

    I am trying to use FFmpeg as this library, but I do not understand how to get frames from the source HDR video in the scRGB color space.

    I currently use FFmpeg's sws_scale to get frames, and I know about the filters API. But I could not find any information on how to transparently obtain scRGB frames using this functionality without parsing the metadata of every source video and creating a custom filter chain for each one.

    Please tell me what I can do to get frames in the scRGB color space using FFmpeg. Can someone suggest other libraries with which I can do it?