Other articles (59)

  • Customize by adding your logo, banner or background image

    5 September 2013, by

    Some themes support three customization elements: adding a logo; adding a banner; adding a background image.

  • Writing a news item

    21 June 2013, by

    Present the changes to your MediaSPIP, or the news about your projects on your MediaSPIP, using the news section.
    In MediaSPIP's default theme, spipeo, news items are displayed at the bottom of the main page, below the editorials.
    You can customize the news item creation form.
    News item creation form In the case of a document of type news item, the fields offered by default are: Publication date ( customize the publication date ) (...)

  • Updating from version 0.1 to 0.2

    24 June 2013, by

    Explanation of the notable changes when moving from MediaSPIP version 0.1 to version 0.3. What are the new features?
    Regarding software dependencies Use of the latest versions of FFmpeg (>= v1.2.1); installation of the dependencies for Smush; installation of MediaInfo and FFprobe to retrieve metadata; ffmpeg2theora is no longer used; flvtool2 is no longer installed, in favour of flvtool++; ffmpeg-php, which is no longer maintained, is no longer installed (...)

On other sites (6102)

  • ffmpeg to generate dash and HLS - best practise

    8 September 2017, by LaborC

    Looking for the correct way to encode a given input video in multiple bitrates and then package it for DASH and HLS. I thought this would be a basic task, but for me it was quite a challenge. So the way I do it is as follows:

    First I split my video (mp4) into video and audio (I re-encode the audio because I need to make sure the output codec is AAC, which I think is a requirement for the web).

    ffmpeg -i source_input.mp4 -an -c:v copy video_na.mp4
    ffmpeg -i source_input.mp4 -vn -c:a aac -ac 2 -async 1 audio.mp4

    Then I encode the video with the following commands:

       ffmpeg.exe -i video_na.mp4 -an -c:v libx264 -crf 18 \
    -preset fast -profile:v high -level 4.2 -b:v 2000k -minrate 2000k \
    -maxrate 2000k -bufsize 4000k -g 96 -keyint_min 96 -sc_threshold 0 \
    -filter:v "scale='trunc(oh*a/2)*2:576'" -movflags +faststart \
    -pix_fmt yuv420p -threads 4 -f mp4 video-2000k.mp4

       ffmpeg.exe -i video_na.mp4 -an -c:v libx264 -crf 18 \
    -preset fast -profile:v high -level 4.2 -b:v 1500k -minrate 1500k \
    -maxrate 1500k -bufsize 3000k -g 96 -keyint_min 96 -sc_threshold 0 \
    -filter:v "scale='trunc(oh*a/2)*2:480'" -movflags +faststart \
    -pix_fmt yuv420p -threads 4 -f mp4 video-1500k.mp4

    After that I fragment the videos (I used the parameter --timescale 10000, but then the result was out of sync).
    Side note: the -g parameter is 4 times 24 (frames). This is important because the fragment duration is 4000 ms (4 seconds).

    mp4fragment --fragment-duration 4000 video-2000k.mp4 \
    video-2000k-f.mp4

    mp4fragment --fragment-duration 4000 video-1500k.mp4 \
    video-1500k-f.mp4

    And finally I package everything together again for DASH (I used to use --use-segment-timeline, but then again the result was out of sync).
    I use mp4dash and not mp4box because I want to be able to encrypt everything later on for DRM.

    mp4dash --media-prefix=out  \
         video-2000k-f.mp4  \
         video-1500k-f.mp4  \
        --out dash

    The result works in Firefox, Chrome and IE/Edge via a web server, and via AWS CloudFront streaming also on older browsers.

    So for me there are still two tasks to accomplish.
    First, I need to generate an HLS package for iPhone and iPad users.
    And second: I need to encrypt everything.

    So far my HLS command is:

    ffmpeg -y -loglevel info \
           -i video-2000k.mp4 \
           -i video-1500k.mp4 \
           -i audio.mp4 \
           -profile:v baseline -start_number 0 -hls_time 10 \
           -flags -global_header -hls_list_size 0 -f hls hls/master.m3u8

    This basically works, but it generates only one bandwidth, without the possibility of multiple streams.
    I am not certain about that statement, but it looks that way.
    Does anyone have an idea of what I am doing wrong?

  • libav works with RTP stream on PC, but not on Android (Same RTP stream)

    25 September 2016, by Nitay

    I’m using libav to decode video received from a 3rd party. The video is received in an Android app and is then made into an RTP stream and published to another device.

    When I use the PC as the target device, the stream decodes properly and I see video. When I use Android (the same code, compiled for Android) the video does not decode at all.
    This happens only with the video from the 3rd party; other video streams work fine both on PC and on Android.

    To be clear:

    • If the stream is cast from the command line using ffmpeg -> video is displayed both on Android & on PC
    • If the stream is cast from the Android app -> video is displayed only on PC (the same code, compiled for different platforms)

    libav 11.7 was compiled for Android with the following configure invocation:

    NDK=/opt/android-ndk-r12b
    SYSROOT="${NDK}/platforms/android-23/arch-arm/"
    ECFLAGS="-march=armv7-a -mfloat-abi=softfp -I /usr/local/include"
    ELDFLAGS="-Wl,--fix-cortex-a8 -L /usr/local/lib"
    ARCH_SPECIFIC="--disable-asm --arch=arm --cpu=armv7-a --cross-prefix=/opt/android-ndk-r12b/prebuilt/linux-x86_64/bin/../../../toolchains/arm-linux-androideabi-4.9/prebuilt/linux-x86_64/bin/arm-linux-androideabi-"

    ./configure \
    ${ARCH_SPECIFIC} \
    --target-os=linux \
    --sysroot="$SYSROOT" \
    --extra-cflags="$ECFLAGS" \
    --extra-ldflags="$ELDFLAGS" \
    --enable-shared \
    --disable-symver

    (--disable-asm is unfortunately needed to avoid text relocations in the compiled library, which are not allowed on Android)

    Here are the libav logs from the Android side: http://pastebin.com/MDE3N7BD

    The lines starting with LIBAV are libav messages; the ones without are my own messages wrapped around the libav functions.

    Logs from the PC side: http://pastebin.com/N0Fd18F9

    The loop that reads the frames:

           // If read frame fails (which happens), keep trying
           LOG_DEBUG("Before read frame");
           while (av_read_frame(formatContext, &packet) >= 0 && !terminate)
           {
               LOG_DEBUG1("Packet read. Size: %d", packet.size);

               this->DecodeFrame(videoStreamIndex, formatContext->streams[videoStreamIndex]->codec, &packet);

               av_free_packet(&packet);
               av_init_packet(&packet);
           }

    And here's the frame decoding code:

    void VideoDecoder::DecodeFrame(int videoStreamIndex, AVCodecContext* streamCodec, AVPacket* packet)
    {
       static bool save_file = false;

       AVPixelFormat destinationFormat = AV_PIX_FMT_RGBA;
       LOG_DEBUG("Decoding frame!");


       if (this->isFirstFrame)
       {
           LOG_DEBUG("Creating codecs");
           this->isFirstFrame = false;
           // For parsing the packets, we first need to create the right codec
           AVCodec* h264Codec = NULL;
           // (I'm not sure why ffmpeg needs this. It has an SDP file which states exactly that, but okay)
           h264Codec = avcodec_find_decoder(AV_CODEC_ID_H264);

           // Now make a copy of the codec for us to change
           codecContext = avcodec_alloc_context3(h264Codec);
           avcodec_get_context_defaults3(codecContext, h264Codec);
           avcodec_copy_context(codecContext, streamCodec);


           // Initialize codecContext to use codec
           if (avcodec_open2(codecContext, h264Codec, NULL) >= 0)
           {
               // There's a nasty edge case here that we need to handle first
               if (streamCodec->width == 0 || streamCodec->height == 0)
               {
                    // That means the stream was initialized before any packets were sent to it, so we can't
                    // initialize any buffers without knowing their size. To tackle this we'll initialize the
                    // largest buffer we can think of.

                   codecContext->width = MAX_RESOLUTION_WIDTH;
                   codecContext->height = MAX_RESOLUTION_HEIGHT;
               }

               // Instantiate new buffers
               int size = avpicture_get_size(AV_PIX_FMT_YUV420P, codecContext->width, codecContext->height);
               originalPic = av_frame_alloc();
               originalPicBuffer = (uint8_t*)(av_malloc(size));

               avpicture_fill((AVPicture*)originalPic, originalPicBuffer, AV_PIX_FMT_YUV420P, codecContext->width, codecContext->height);
           }

           // Instantiate an output context, for usage in the conversion of the picture
           outputFormatContext = avformat_alloc_context();
       }

       if ((packet->stream_index == videoStreamIndex) && !terminate)
       {
           // Packet is video. Convert!

           if (outputStream == NULL)
           {
               //create stream in file
               outputStream = avformat_new_stream(outputFormatContext, streamCodec->codec);
               avcodec_copy_context(outputStream->codec, streamCodec);
               outputStream->sample_aspect_ratio = streamCodec->sample_aspect_ratio;
           }

           int pictureReceived = 0;
           packet->stream_index = outputStream->id;
           int result = avcodec_decode_video2(codecContext, originalPic, &pictureReceived, packet);
           //          std::cout << "Bytes decoded " << result << " check " << check << std::endl;

           if (pictureReceived)
           {
               LOG_DEBUG("New frame received");
                // NOTICE: It is generally not good practice to allocate on demand instead of at initialization.
                // In this case the edge cases demand it (what happens if width==0 on the first packet?)
               if (this->imageConvertContext == NULL)
               {
                   // Allocate pictures and buffers for conversion
                   this->imageConvertContext = sws_getContext(
                       codecContext->width,
                       codecContext->height,
                       codecContext->pix_fmt,
                       codecContext->width,
                       codecContext->height,
                       destinationFormat,
                       SWS_BICUBIC,
                       NULL, NULL, NULL);
               }

               if (this->convertedPic == NULL)
               {
                   int size_rgba = avpicture_get_size(destinationFormat, codecContext->width, codecContext->height);
                   convertedPicBuffer = (uint8_t*)(av_malloc(size_rgba));
                   convertedPic = av_frame_alloc();
                   avpicture_fill((AVPicture*)convertedPic, convertedPicBuffer, destinationFormat, codecContext->width, codecContext->height);
               }

               // Scale the image
               sws_scale(imageConvertContext, originalPic->data, originalPic->linesize, 0, codecContext->height, convertedPic->data, convertedPic->linesize);

               // We have a frame! Callback
               if (frameReadyCallback != NULL)
               {
                   LOG_DEBUG3("Updated frame [width=%d, height=%d, ptr=0x%08x]", codecContext->width, codecContext->height, convertedPic->data[0]);
                   if (save_file)
                   {
                       save_file = false;
                       std::string filename = "/storage/emulated/0/DCIM/saved_file.rgba";
                       save_buffer_to_file((unsigned char*)convertedPic->data[0], codecContext->width * codecContext->height * 4, filename.c_str());
                       LOG_DEBUG("Exported file");
                   }
                   frameReadyCallback((char*)convertedPic->data[0], codecContext->width, codecContext->height);
               }
           }
           else
           {
               LOG_DEBUG("Packet without frame");
           }
       }
    }

    Obviously the stream from the 3rd party is somehow different, and probably from a different encoder. But it works with libav (the same version) on the PC. What could be the difference on Android that causes it not to find the frames?
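
    One way to narrow this down, assuming a recent ffprobe (or avprobe) that can read the SDP describing each published stream, would be to probe the working command-line stream and the Android app's stream separately and compare the codec parameters they report; session.sdp is just a placeholder name here:

    ffprobe -loglevel debug -protocol_whitelist "file,udp,rtp" \
            -show_streams -i session.sdp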

  • Merge commit 'b3739599bda740ac12d3dde31a331b744df99123'

    23 October 2017, by James Almer
    Merge commit 'b3739599bda740ac12d3dde31a331b744df99123'
    

    * commit 'b3739599bda740ac12d3dde31a331b744df99123':
    lavc: Drop deprecated emu edge functionality

    Merged-by: James Almer <jamrial@gmail.com>

    • [DH] fftools/ffmpeg_opt.c
    • [DH] fftools/ffplay.c
    • [DH] libavcodec/avcodec.h
    • [DH] libavcodec/mjpegenc.c
    • [DH] libavcodec/options_table.h
    • [DH] libavcodec/snowenc.c
    • [DH] libavcodec/utils.c
    • [DH] libavcodec/version.h
    • [DH] libavcodec/wmv2dec.c