Other articles (90)

  • Customizing by adding your logo, banner or background image

    5 September 2013

    Some themes support three customization elements: adding a logo; adding a banner; adding a background image.

  • Mediabox: opening images in the maximum space available to the user

    8 February 2011

    Image display is constrained by the width allowed by the site's design (which depends on the theme in use), so images are shown at a reduced size. To take advantage of all the space available on the user's screen, a display feature can be added that opens the image in a multimedia box overlaid on the rest of the content.
    To do this, the "Mediabox" plugin must be installed.
    Configuring the multimedia box
    As soon as (...)

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your installed MédiaSpip is at version 0.2 or higher. If in doubt, contact the administrator of your MédiaSpip to find out.

On other sites (15462)

  • AVFrame confusion between width, height and linesize

    27 August 2019, by Lucas Zanella

    I'm trying to understand AVFrame, especially the linesize property.

    Here are my decoded AVFrame's properties:

    width = 640
    height = 360

    linesize[0] = 640
    linesize[1] = 320
    linesize[2] = 320

    It's a YUV 4:2:0 planar image (AV_PIX_FMT_YUVJ420P).

    I'm reading this code, and here's the part that deals with the AVFrame properties:

    int linesize = qAbs(m_format.renderFrame->linesize[i]);
    AVRational widthRational = params.yuvwidths[i];
    AVRational heightRational = params.yuvheights[i];
    int width = linesize * widthRational.num / widthRational.den;
    int height = m_format.renderFrame->height * heightRational.num / heightRational.den;
    glTexImage2D(GL_TEXTURE_2D, 0, params.yuvInternalformat[i], width, height, 0, params.yuvGlFormat[i], params.dataType, NULL);

    Here, for YUV420P, widthRational and heightRational are 1/1 for i = 0 and 1/2 for i = 1, 2, and yuvInternalformat and yuvGlFormat are always GL_RED.

    There are a few things I can't understand in this code:

    Why does he take the absolute value of linesize? Can linesize be negative? There's nothing about negative values in the documentation. I understand why he applies the fraction to height, but why to linesize? Shouldn't linesize be the actual width of the plane, and thus require no multiplication?

    So what exactly is linesize, and how should width and height be calculated in order to use glTexImage2D?
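
    For reference, here is a minimal sketch of how linesize is normally consumed (my own illustration, assuming the FFmpeg development headers, not code from the question). linesize[i] is the stride in bytes between the starts of consecutive rows of plane i: it can be larger than the visible width because decoders add alignment padding, and FFmpeg allows it to be negative for bottom-up layouts, which would explain the qAbs.

    // Sketch: copy one plane of a decoded AVFrame into a tightly packed
    // buffer, row by row, honouring the stride.
    #include <cstdint>
    #include <cstring>
    extern "C" {
    #include <libavutil/frame.h>
    }

    // planeWidth/planeHeight are the visible dimensions of this plane
    // (width/2 x height/2 for the U and V planes of YUV420P).
    void copyPlanePacked(const AVFrame *frame, int plane,
                         int planeWidth, int planeHeight, uint8_t *dst)
    {
        const uint8_t *src = frame->data[plane];
        const int stride = frame->linesize[plane]; // bytes per row, padding included
        for (int y = 0; y < planeHeight; ++y) {
            memcpy(dst + y * planeWidth, src, planeWidth);
            src += stride; // a negative stride walks the picture bottom-up
        }
    }

    On that reading, the code above sizes the texture from the stride so the upload needs no per-row repacking, and the padding columns are presumably cropped later via texture coordinates.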

  • Problem using ffmpeg in Python to decode a video stream to YUV and send it to a pipe

    30 August 2019, by 瓦达所多阿萨德阿萨德

    This command runs nice and fast in a shell:

    ffmpeg -c:v h264_cuvid -i ./myvideo.mp4 -f null -pix_fmt yuv420p -

    It ran at about 13x speed, or roughly 300 frames/sec.
    Then I tried to send the YUV stream to a pipe and catch it in the main process, using the following Python code:

    import subprocess
    import time

    # height and width come from probing the video beforehand (1080 and 1920 here)
    cmd = ['ffmpeg', '-c:v', 'h264_cuvid', '-i', './myvideo.mp4', '-f', 'image2pipe', '-pix_fmt', 'yuv420p', '-']
    p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    size_Y = int(height * width)   # bytes in the Y plane
    size_UV = int(size_Y / 4)      # bytes in each of the U and V planes
    s = time.time()
    Y = p.stdout.read(size_Y)
    U = p.stdout.read(size_UV)
    V = p.stdout.read(size_UV)
    print('read time: ', time.time() - s)

    However, this took seconds to read just one YUV frame. What went wrong here? I'm not sure what ffmpeg was sending into the pipe: whole planar YUV frames, or just pointers to the data planes? (A possible culprit is sketched after the console output below.)

    The console output:

    [('Number of Frames', 61137), ('FPS', 25.0), ('Frame Shape', (1920, 1080))]
    --Number of Frames: 61137
    --FPS: 25.0
    --Frame Shape: (1920, 1080)
    cmd:  ['ffmpeg', '-c:v', 'h264_cuvid', '-i', './myvideo.mp4', '-f', 'image2pipe', '-pix_fmt', 'yuv420p', '-']
    read time:  5.251002073287964
    1/61137 (0.00%)read time:  2.290238618850708
    2/61137 (0.00%)read time:  1.2984871864318848
    3/61137 (0.00%)read time:  2.2100613117218018
    4/61137 (0.01%)read time:  2.3444178104400635
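
    Two guesses, offered as hypotheses rather than a diagnosis: ffmpeg writes its progress log to stderr, and with stderr=subprocess.PIPE never drained the pipe buffer can fill and stall the process; and -f image2pipe selects an image muxer rather than the raw planar stream the three reads assume, where -f rawvideo is the usual choice. A minimal C++ sketch of the intended pipeline (file name and decoder taken from the question; POSIX popen assumed), reading one whole frame per call:

    #include <cstdint>
    #include <cstdio>
    #include <vector>

    int main()
    {
        const int width = 1920, height = 1080;
        const size_t frameSize = width * height * 3 / 2; // Y + U + V for yuv420p

        // -f rawvideo emits tightly packed planar frames; -v error silences
        // the progress log so nothing is left to clog stderr.
        FILE *pipe = popen("ffmpeg -v error -c:v h264_cuvid -i ./myvideo.mp4 "
                           "-f rawvideo -pix_fmt yuv420p -", "r");
        if (pipe == nullptr)
            return 1;

        std::vector<uint8_t> frame(frameSize);
        size_t nFrames = 0;
        while (fread(frame.data(), 1, frameSize, pipe) == frameSize)
        {
            // frame now holds one picture: Y in the first width*height bytes,
            // then the U plane, then the V plane (a quarter of Y each).
            ++nFrames;
        }
        pclose(pipe);
        std::printf("%zu frames read\n", nFrames);
        return 0;
    }

    The equivalent change in the original Python would be -f rawvideo plus a single p.stdout.read(size_Y + 2 * size_UV) per frame, with stderr sent to subprocess.DEVNULL.
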
  • Incorrect duration and bitrate in ffmpeg-encoded audio

    30 May 2019, by Timmy K

    I am encoding raw data on Android using the ffmpeg libraries. The native code reads the audio data from an external device and encodes it into AAC format in an mp4 container. The audio data itself is successfully encoded (I can play it with Groove Music, my default Windows audio player). But the metadata, as reported by ffprobe, has an incorrect duration of 0.05 secs; the recording is actually several seconds long. The bitrate is also reported wrongly, as around 65 kbps, even though I specified 192 kbps.

    I've tried recordings of various durations, but the result is always similar: the same (very small) duration and bitrate. I've tried various other audio players, such as QuickTime, but they play only the first 0.05 secs or so of the audio.

    I’ve removed error-checking from the following. The actual code checks every call and no problems are reported.

    Initialisation:

    void AudioWriter::initialise( const char *filePath )
    {
       AVCodecID avCodecID = AVCodecID::AV_CODEC_ID_AAC;
       int bitRate = 192000;
       const char *containerFormat = "mp4";
       int sampleRate = 48000;
       int nChannels = 2;

       mAvCodec = avcodec_find_encoder(avCodecID);
       mAvCodecContext = avcodec_alloc_context3(mAvCodec);
       mAvCodecContext->codec_id = avCodecID;
       mAvCodecContext->codec_type = AVMEDIA_TYPE_AUDIO;
       mAvCodecContext->sample_fmt = AV_SAMPLE_FMT_FLTP;
       mAvCodecContext->bit_rate = bitRate;
       mAvCodecContext->sample_rate = sampleRate;
       mAvCodecContext->channels = nChannels;
       mAvCodecContext->channel_layout = AV_CH_LAYOUT_STEREO;

       avcodec_open2( mAvCodecContext, mAvCodec, nullptr );

       mAvFormatContext = avformat_alloc_context();

       avformat_alloc_output_context2(&mAvFormatContext, nullptr, containerFormat, nullptr);
       mAvFormatContext->audio_codec = mAvCodec;
       mAvFormatContext->audio_codec_id = avCodecID;
       mAvOutputStream = avformat_new_stream(mAvFormatContext, mAvCodec);
       avcodec_parameters_from_context(mAvOutputStream->codecpar, mAvCodecContext);
       if (!(mAvFormatContext->oformat->flags & AVFMT_NOFILE))
       {
           avio_open(&mAvFormatContext->pb, filePath, AVIO_FLAG_WRITE);
       }

       if ( mAvFormatContext->oformat->flags & AVFMT_GLOBALHEADER )
       {
           mAvCodecContext->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
       }
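       // NB: FFmpeg's muxing example sets AV_CODEC_FLAG_GLOBAL_HEADER before
       // avcodec_open2() and before avcodec_parameters_from_context(); here the
       // codec is already open, which may leave the mp4 global header stale.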

       avformat_write_header(mAvFormatContext, NULL);

       mAvAudioFrame = av_frame_alloc();
       mAvAudioFrame->nb_samples = mAvCodecContext->frame_size;
       mAvAudioFrame->format = mAvCodecContext->sample_fmt;
       mAvAudioFrame->channel_layout = mAvCodecContext->channel_layout;

       av_samples_get_buffer_size(NULL, mAvCodecContext->channels, mAvCodecContext->frame_size,
                                                    mAvCodecContext->sample_fmt, 0);
       av_frame_get_buffer(mAvAudioFrame, 0);
       av_frame_make_writable(mAvAudioFrame);
       mAvPacket = av_packet_alloc();
     }

    Encoding:

    // SoundRecording is a custom class with the raw samples to be encoded
    bool AudioWriter::encodeToContainer( SoundRecording *soundRecording )
    {
       int ret;
       int frameCount = mAvCodecContext->frame_size;
       int nChannels = mAvCodecContext->channels;
       float *buf = new float[frameCount*nChannels];

       while ( soundRecording->hasReadableData() )
       {
           //Populate the frame
           int samplesRead = soundRecording->read( buf, frameCount*nChannels );
           // Planar data
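           // ('samples' is not declared in this excerpt; presumably it points at
           // mAvAudioFrame's planar float channels, e.g. (float**)mAvAudioFrame->data.)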
           int nFrames = samplesRead/nChannels;
           for ( int i = 0; i < nFrames; ++i )
           {
               for (int c = 0; c < nChannels; ++c )
               {
                   samples[c][i] = buf[nChannels*i +c];
               }
           }
           // Fill a gap at the end with silence
           if ( samplesRead < frameCount*nChannels )
           {
               for ( int i = samplesRead; i < frameCount*nChannels; ++i )
               {
                   for (int c = 0; c < nChannels; ++c )
                   {
                       samples[c][i] = 0.0;
                   }
               }
           }

        encodeFrame( mAvAudioFrame );
       }

       finish();
       return true;
    }

    bool AudioWriter::encodeFrame( AVFrame *frame )
    {
       //send the frame for encoding
       int ret;

       if ( frame != nullptr )
       {
           frame->pts = mAudFrameCounter++;
       }
       ret = avcodec_send_frame( mAvCodecContext, frame );

       while (ret >= 0)
       {
           ret = avcodec_receive_packet(mAvCodecContext, mAvPacket);
           if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF )
           {
               break;
           }
           else if ( ret < 0 )
           {
               return false;
           }
           av_packet_rescale_ts(mAvPacket, mAvCodecContext->time_base, mAvOutputStream->time_base);
           mAvPacket->stream_index = mAvOutputStream->index;

           av_interleaved_write_frame(mAvFormatContext, mAvPacket);
           av_packet_unref(mAvPacket);
       }

       return true;
    }

    void AudioWriter::finish()
    {
       // Flush by sending a null frame
       encodeFrame( nullptr );

       av_write_trailer(mAvFormatContext);
    }

    Since the resultant file contains the recorded music, the code that manipulates the audio data seems to be correct (unless I am somehow overwriting other memory).

    The inaccurate duration and bitrate suggest that timing information is not being properly managed. I set the pts of the frames using a simple increasing integer. I'm unclear what the code that sets the timestamp and stream index achieves, and whether it's even necessary: I copied it from supposedly working code, but I've seen other code without it.
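
    One plausible cause, offered as a guess rather than a confirmed diagnosis: the audio time base is typically 1/sample_rate, so a pts that advances by 1 per frame packs each 1024-sample frame into a single sample's worth of time, which would yield a tiny reported duration and a skewed average bitrate. The usual bookkeeping advances pts by the number of samples in each frame, sketched here against the code above (the explicit time_base assignment is an assumption; the default after avcodec_open2 may already match):

    // Sketch of the usual pts bookkeeping, assuming time_base = 1/sample_rate.
    // In initialise(), before avcodec_open2():
    mAvCodecContext->time_base = AVRational{ 1, sampleRate };

    // In encodeFrame(), instead of frame->pts = mAudFrameCounter++ :
    if ( frame != nullptr )
    {
        frame->pts = mAudFrameCounter;           // pts measured in samples
        mAudFrameCounter += frame->nb_samples;   // 1024 per frame for AAC
    }

    With sample-based pts, av_packet_rescale_ts converts them into the stream's time base, which is what the muxer uses to compute the duration, so the rescale and stream_index lines are indeed necessary. Setting AV_CODEC_FLAG_GLOBAL_HEADER only after avcodec_open2() and after avcodec_parameters_from_context() also looks suspect and may be worth moving earlier.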

    Can anyone see what I’m doing wrong ?