
Other articles (64)

  • Use, discuss, criticize

    13 April 2011, by

    Talk to people directly involved in MediaSPIP’s development, or to people around you who could use MediaSPIP to share, enhance or develop their creative projects.
    The bigger the community, the more MediaSPIP’s potential will be explored and the faster the software will evolve.
    A discussion list is available for all exchanges between users.

  • User profiles

    12 April 2011, by

    Each user has a profile page that lets them edit their personal information. In the default top-of-page menu, a menu item is created automatically when MediaSPIP is initialized; it is visible only when the visitor is logged in on the site.
    The user can edit their profile from their author page; a "Modifier votre profil" ("Edit your profile") link in the navigation is (...)

  • MediaSPIP Player: controls

    26 May 2010, by

    Mouse controls for the player
    In addition to clicking the visible buttons of the player interface, other actions can be performed with the mouse: Click: clicking on the video, or on the logo for audio, toggles playback or pause depending on the current state; Wheel (scrolling): when the mouse hovers over the area used by the media, the mouse wheel no longer scrolls the page as usual but instead decreases or (...)

On other sites (7165)

  • Download highest quality audio from YouTube using youtube-dl

    3 June 2020, by darvast

    I'm using this command:

    youtube-dl -f bestaudio --extract-audio --audio-format "opus" --add-metadata -o "%(playlist_index)s-%(title)s.%(ext)s" "https://www.youtube.com/playlist?list=OLAK5uy_lWRq5MhPNthDDe1nYXtlekDA40wtrpKE0"

    Here are the available streams:

    [info] Available formats for 6t1dErgAglk:
format code  extension  resolution note
249          webm       audio only tiny   58k , opus @ 50k (48000Hz), 416.34KiB
250          webm       audio only tiny   72k , opus @ 70k (48000Hz), 516.52KiB
140          m4a        audio only tiny  130k , m4a_dash container, mp4a.40.2@128k (44100Hz), 1.06MiB
251          webm       audio only tiny  131k , opus @160k (48000Hz), 923.79KiB
278          webm       140x144    144p   32k , webm container, vp9, 25fps, video only, 159.74KiB
160          mp4        140x144    144p   54k , avc1.4d400b, 25fps, video only, 278.62KiB
242          webm       232x240    240p   71k , vp9, 25fps, video only, 321.29KiB
134          mp4        350x360    360p   96k , avc1.4d4015, 25fps, video only, 303.75KiB
133          mp4        232x240    240p  124k , avc1.4d400c, 25fps, video only, 651.46KiB
243          webm       350x360    360p  126k , vp9, 25fps, video only, 545.77KiB
135          mp4        466x480    360p  174k , avc1.4d401e, 25fps, video only, 534.97KiB
244          webm       466x480    360p  215k , vp9, 25fps, video only, 1003.20KiB
136          mp4        698x720    720p  305k , avc1.4d401f, 25fps, video only, 942.76KiB
137          mp4        1048x1080  1080p  494k , avc1.640020, 25fps, video only, 1.49MiB
247          webm       698x720    720p  593k , vp9, 25fps, video only, 1.97MiB
248          webm       1048x1080  1080p  768k , vp9, 25fps, video only, 3.81MiB
18           mp4        350x360    360p  213k , avc1.42001E, 25fps, mp4a.40.2@ 96k (44100Hz), 1.73MiB
22           mp4        698x720    720p  242k , avc1.64001F, 25fps, mp4a.40.2@192k (44100Hz) (best)

    When I run the above command, it seems to be converting m4a to opus: https://prnt.sc/st2l8u

    I'm wondering why it's doing that instead of getting it from the webm container?
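
    If the goal is simply to avoid any transcoding, one workaround (a sketch using youtube-dl's standard format-selection syntax; format 251 comes from the listing above) is to pin the selection to the Opus/WebM stream, so that --audio-format "opus" only has to remux instead of re-encode:

    youtube-dl -f "bestaudio[acodec=opus]" --extract-audio --audio-format "opus" --add-metadata -o "%(playlist_index)s-%(title)s.%(ext)s" "https://www.youtube.com/playlist?list=OLAK5uy_lWRq5MhPNthDDe1nYXtlekDA40wtrpKE0"

    Running with --verbose should also print which concrete format bestaudio resolved to for each playlist entry.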

  • Fail to decode a video with ffmpeg, but it can be played by video player

    23 June 2022, by south

    I have a video which can be played by video players, but I failed to decode it using ffmpeg 3.4.
Actually, it fails with the ffmpeg libs I compiled myself, but succeeds with a common ffmpeg-3.4 lib from my company.

    My compilation seems successful, as I can use it to decode most of my videos.

    What's wrong with my lib? Should I enable some special options when compiling?
Is there anything special about this video?

    Error message:

    [mov,mp4,m4a,3gp,3g2,mj2 @ 0x31b7120] STSC entry 1 is invalid (first=12 count=0 id=1)
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x31b7120] stream 0, contradictionary STSC and STCO
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x31b7120] error reading header
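
    For what it's worth, later FFmpeg releases appear to handle invalid STSC entries more leniently (a warning rather than a hard header error), so the company build may simply carry that change or a local patch; as far as I know this check is not gated by any ./configure option. Assuming the ffmpeg CLI built from the same tree, the standard error-tolerance flag is one way to probe whether the header validation is the only obstacle (whether the mov demuxer honors it for this particular check depends on the revision):

    ffmpeg -err_detect ignore_err -i aaa -f null -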

    Video info dumped when I use my company's libs:

    Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'aaa':
  Metadata:
    major_brand     : mp42
    minor_version   : 0
    compatible_brands: mp42isom
    creation_time   : 2019-08-06T16:42:23.000000Z
  Duration: 00:00:10.89, start: 0.000000, bitrate: N/A
    Stream #0:0(und): Video: h264 (Baseline) (avc1 / 0x31637661), yuv420p, 1280x720, 2815 kb/s, 25.66 fps, 25.64 tbr, 1k tbn, 50 tbc (default)
    Metadata:
      creation_time   : 2019-08-06T16:42:24.000000Z
      handler_name    :
      encoder         : VC Coding
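
    Since both libs are based on 3.4, comparing the two builds' configure lines may be more telling than the video itself. Assuming both trees also ship the CLI, -buildconf is a standard way to dump them:

    ffmpeg -buildconf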

  • Transcoded video stream unplayable in QuickTime player

    30 November 2017, by Nikolai Linetskiy

    Currently I'm writing software for transcoding media files using the ffmpeg libs. The problem is that in the case of H264, QuickTime cannot play the resulting stream and shows a black screen. Audio streams work as expected. I have read that QuickTime can deal only with the yuv420p pixel format, and that is true for the encoded video.

    I looked through the ffmpeg examples and the ffmpeg source code and could not find any clues as to where the problem might be. I would really appreciate any help.

    The only thing I managed to get from QuickTime is a
    "SeqAndPicParamSetFromCFDictionaryRef, bad config record" message in the console. The same thing is logged by AVPlayer from AVFoundation.

    Here is the initialization of output streams and encoders.

    int status;

    // avformat_alloc_output_context2()
    if ((status = formatContext.open(destFilename)) < 0) {
       return status;
    }

    AVDictionary *fmtOptions = nullptr;
    av_dict_set(&fmtOptions, "movflags", "faststart", 0);
    av_dict_set(&fmtOptions, "brand", "mp42", 0);

    streams.resize(input->getStreamsCount());
    for (int i = 0; i < input->getStreamsCount(); ++i) {
       AVStream *inputStream = input->getStreamAtIndex(i);
       CodecContext &decoderContext = input->getDecoderAtIndex(i);

       // retrieve output codec by codec id
       auto encoderCodecId = decoderContext.getCodecID();
       if (decoderContext.getCodecType() == AVMEDIA_TYPE_VIDEO || decoderContext.getCodecType() == AVMEDIA_TYPE_AUDIO) {
           int codecIdKey = decoderContext.getCodecType() == AVMEDIA_TYPE_AUDIO ? IPROC_KEY_INT(TargetAudioCodecID) : IPROC_KEY_INT(TargetVideoCodecID);
           auto codecIdParam = static_cast<AVCodecID>(params[codecIdKey]);
           if (codecIdParam != AV_CODEC_ID_NONE) {
               encoderCodecId = codecIdParam;
           }
       }
       AVCodec *encoder = nullptr;
       if ((encoder = avcodec_find_encoder(encoderCodecId)) == nullptr) {
           status = AVERROR_ENCODER_NOT_FOUND;
           return status;
       }

       // create stream with specific codec and format
       AVStream *outputStream = nullptr;
       // avformat_new_stream()
       if ((outputStream = formatContext.newStream(encoder)) == nullptr) {
           return AVERROR(ENOMEM);
       }


       CodecContext encoderContext;
       // avcodec_alloc_context3()
       if ((status = encoderContext.init(encoder)) < 0) {
           return status;
       }

       outputStream->disposition = inputStream->disposition;
       encoderContext.getRawCtx()->chroma_sample_location = decoderContext.getRawCtx()->chroma_sample_location;

       if (encoderContext.getCodecType() == AVMEDIA_TYPE_VIDEO) {
           auto lang = av_dict_get(input->getStreamAtIndex(i)->metadata, "language", nullptr, 0);
           if (lang) {
               av_dict_set(&outputStream->metadata, "language", lang->value, 0);
           }

           // prepare encoder context
           int targetWidth = params[IPROC_KEY_INT(TargetVideoWidth)];
           int targetHeight = params[IPROC_KEY_INT(TargetVideHeight)];



           encoderContext.width() = targetWidth > 0 ? targetWidth : decoderContext.width();
           encoderContext.height() = targetHeight > 0 ? targetHeight : decoderContext.height();
           encoderContext.pixelFormat() = encoder->pix_fmts ? encoder->pix_fmts[0] : decoderContext.pixelFormat();
           encoderContext.timeBase() = decoderContext.timeBase();
           encoderContext.getRawCtx()->level = 31;
           encoderContext.getRawCtx()->gop_size = 25;

           double far = static_cast<double>(encoderContext.getRawCtx()->width) / encoderContext.getRawCtx()->height;
           double dar = static_cast<double>(decoderContext.width()) / decoderContext.height();
           encoderContext.sampleAspectRatio() = av_d2q(dar / far, 255);


           encoderContext.getRawCtx()->bits_per_raw_sample = FFMIN(decoderContext.getRawCtx()->bits_per_raw_sample,
                                                                   av_pix_fmt_desc_get(encoderContext.pixelFormat())->comp[0].depth);
           encoderContext.getRawCtx()->framerate = inputStream->r_frame_rate;
           outputStream->avg_frame_rate = encoderContext.getRawCtx()->framerate;

           VideoFilterGraphParameters params;
           params.height = encoderContext.height();
           params.width = encoderContext.width();
           params.pixelFormat = encoderContext.pixelFormat();
           if ((status = generateGraph(decoderContext, encoderContext, params, streams[i].filterGraph)) < 0) {
               return status;
           }

       } else if (encoderContext.getCodecType() == AVMEDIA_TYPE_AUDIO) {
           auto lang = av_dict_get(input->getStreamAtIndex(i)->metadata, "language", nullptr, 0);
           if (lang) {
               av_dict_set(&outputStream->metadata, "language", lang->value, 0);
           }

           encoderContext.sampleRate() = params[IPROC_KEY_INT(TargetAudioSampleRate)] ? : decoderContext.sampleRate();
           encoderContext.channels() = params[IPROC_KEY_INT(TargetAudioChannels)] ? : decoderContext.channels();
           auto paramChannelLayout = params[IPROC_KEY_INT(TargetAudioChannelLayout)];
           if (paramChannelLayout) {
               encoderContext.channelLayout() = paramChannelLayout;
           } else {
               encoderContext.channelLayout() = av_get_default_channel_layout(encoderContext.channels());
           }

           AVSampleFormat sampleFormatParam = static_cast<AVSampleFormat>(params[IPROC_KEY_INT(TargetAudioSampleFormat)]);
           if (sampleFormatParam != AV_SAMPLE_FMT_NONE) {
               encoderContext.sampleFormat() = sampleFormatParam;
           } else if (encoder->sample_fmts) {
               encoderContext.sampleFormat() = encoder->sample_fmts[0];
           } else {
               encoderContext.sampleFormat() = decoderContext.sampleFormat();
           }

           encoderContext.timeBase().num = 1;
           encoderContext.timeBase().den = encoderContext.sampleRate();

           AudioFilterGraphParameters params;
           params.channelLayout = encoderContext.channelLayout();
           params.channels = encoderContext.channels();
           params.format = encoderContext.sampleFormat();
           params.sampleRate = encoderContext.sampleRate();
           if ((status = generateGraph(decoderContext, encoderContext, params, streams[i].filterGraph)) < 0) {
               return status;
           }
       }

       // before using encoder, we should open it and update its parameters
       printf("Codec bits per sample %d\n", av_get_bits_per_sample(encoderCodecId));
       AVDictionary *options = nullptr;
       // avcodec_open2()
       if ((status = encoderContext.open(encoder, &options)) < 0) {
           return status;
       }
       if (streams[i].filterGraph) {
           streams[i].filterGraph.setOutputFrameSize(encoderContext.getFrameSize());
       }
       // avcodec_parameters_from_context()
       if ((status = encoderContext.fillParamters(outputStream->codecpar)) < 0) {
           return status;
       }
       outputStream->codecpar->format = encoderContext.getRawCtx()->pix_fmt;

       if (formatContext.getRawCtx()->oformat->flags & AVFMT_GLOBALHEADER) {
           encoderContext.getRawCtx()->flags |= CODEC_FLAG_GLOBAL_HEADER;
       }

       if (encoderContext.getRawCtx()->nb_coded_side_data) {
           int i;

           for (i = 0; i < encoderContext.getRawCtx()->nb_coded_side_data; i++) {
               const AVPacketSideData *sd_src = &encoderContext.getRawCtx()->coded_side_data[i];
               uint8_t *dst_data;

               dst_data = av_stream_new_side_data(outputStream, sd_src->type, sd_src->size);
               if (!dst_data)
                   return AVERROR(ENOMEM);
               memcpy(dst_data, sd_src->data, sd_src->size);
           }
       }

       /*
        * Add global input side data. For now this is naive, and copies it
        * from the input stream's global side data. All side data should
        * really be funneled over AVFrame and libavfilter, then added back to
        * packet side data, and then potentially using the first packet for
        * global side data.
        */
       for (int i = 0; i < inputStream->nb_side_data; i++) {
           AVPacketSideData *sd = &inputStream->side_data[i];
           uint8_t *dst = av_stream_new_side_data(outputStream, sd->type, sd->size);
           if (!dst)
               return AVERROR(ENOMEM);
           memcpy(dst, sd->data, sd->size);
       }

       // copy timebase while removing common factors
       if (outputStream->time_base.num <= 0 || outputStream->time_base.den <= 0) {
           outputStream->time_base = av_add_q(encoderContext.timeBase(), (AVRational){0, 1});
       }

       // copy estimated duration as a hint to the muxer
       if (outputStream->duration <= 0 && inputStream->duration > 0) {
           outputStream->duration = av_rescale_q(inputStream->duration, inputStream->time_base, outputStream->time_base);
       }

       streams[i].codecType = encoderContext.getRawCtx()->codec_type;
       streams[i].codec = std::move(encoderContext);
       streams[i].streamIndex = i;
    }

    // avio_open() and avformat_write_header()
    if ((status = formatContext.writeHeader(fmtOptions)) < 0) {
       return status;
    }

    formatContext.dumpFormat();
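
    One detail worth flagging in the snippet above: CODEC_FLAG_GLOBAL_HEADER is set after avcodec_open2() and avcodec_parameters_from_context() have already run. For H264 in MP4 that typically leaves the SPS/PPS out of codecpar->extradata, which produces exactly the kind of empty or invalid avcC box that QuickTime reports as a bad config record. A minimal sketch of the conventional ordering with raw FFmpeg calls (encCtx, outStream and ofmtCtx are illustrative names, not the wrapper API above):

    // Request global headers BEFORE opening the encoder, so SPS/PPS are
    // emitted into extradata instead of in-band (error handling elided).
    if (ofmtCtx->oformat->flags & AVFMT_GLOBALHEADER)
        encCtx->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;

    if (avcodec_open2(encCtx, encoder, nullptr) < 0)
        return -1;

    // Copy parameters (including the freshly generated extradata) only
    // after the encoder has been opened.
    if (avcodec_parameters_from_context(outStream->codecpar, encCtx) < 0)
        return -1;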

    Reading from the stream:

    int InputProcessor::performStep() {
       int status;

       Packet nextPacket;
       if ((status = input->getFormatContext().readFrame(nextPacket)) < 0) {
           return status;
       }
       ++streams[nextPacket.getStreamIndex()].readPackets;
       int streamIndex = nextPacket.getStreamIndex();
       CodecContext &decoder = input->getDecoderAtIndex(streamIndex);
       AVStream *inputStream = input->getStreamAtIndex(streamIndex);

       if (streams[nextPacket.getStreamIndex()].readPackets == 1) {
           for (int i = 0; i < inputStream->nb_side_data; ++i) {
               AVPacketSideData *src_sd = &inputStream->side_data[i];
               uint8_t *dst_data;

               if (src_sd->type == AV_PKT_DATA_DISPLAYMATRIX) {
                   continue;
               }
               if (av_packet_get_side_data(nextPacket.getRawPtr(), src_sd->type, nullptr)) {
                   continue;
               }
               dst_data = av_packet_new_side_data(nextPacket.getRawPtr(), src_sd->type, src_sd->size);
               if (!dst_data) {
                   return AVERROR(ENOMEM);
               }
               memcpy(dst_data, src_sd->data, src_sd->size);
           }
       }

       nextPacket.rescaleTimestamps(inputStream->time_base, decoder.timeBase());

       status = decodePacket(&nextPacket, nextPacket.getStreamIndex());
       if (status < 0 && status != AVERROR(EAGAIN)) {
           return status;
       }
       return 0;
    }

    Here is the decoding/encoding code:

    int InputProcessor::decodePacket(Packet *packet, int streamIndex) {
       int status;
       int sendStatus;

       auto &decoder = input->getDecoderAtIndex(streamIndex);

       do {
           if (packet == nullptr) {

               sendStatus = decoder.flushDecodedFrames();
           } else {
               sendStatus = decoder.sendPacket(*packet);
           }

           if (sendStatus < 0 && sendStatus != AVERROR(EAGAIN) && sendStatus != AVERROR_EOF) {
               return sendStatus;
           }
           if (sendStatus == 0 &amp;&amp; packet) {
               ++streams[streamIndex].decodedPackets;
           }

           Frame decodedFrame;
           while (true) {
               if ((status = decoder.receiveFrame(decodedFrame)) < 0) {
                   break;
               }
               ++streams[streamIndex].decodedFrames;
               if ((status = filterAndWriteFrame(&decodedFrame, streamIndex)) < 0) {
                   break;
               }
               decodedFrame.unref();
           }
       } while (sendStatus == AVERROR(EAGAIN));

    return status;
    }

    int InputProcessor::encodeAndWriteFrame(Frame *frame, int streamIndex) {
       assert(input->isValid());
       assert(formatContext);

       int status = 0;
       int sendStatus;

       Packet packet;

       CodecContext &amp;encoderContext = streams[streamIndex].codec;

       do {
           if (frame) {
               sendStatus = encoderContext.sendFrame(*frame);
           } else {
               sendStatus = encoderContext.flushEncodedPackets();
           }
           if (sendStatus < 0 && sendStatus != AVERROR(EAGAIN) && sendStatus != AVERROR_EOF) {
               return status;
           }
           if (sendStatus == 0 &amp;&amp; frame) {
               ++streams[streamIndex].encodedFrames;
           }

           while (true) {
               if ((status = encoderContext.receivePacket(packet)) < 0) {
                   break;
               }
               ++streams[streamIndex].encodedPackets;
               packet.setStreamIndex(streamIndex);
               auto sourceTimebase = encoderContext.timeBase();
               auto dstTimebase = formatContext.getStreams()[streamIndex]->time_base;
               packet.rescaleTimestamps(sourceTimebase, dstTimebase);
               if ((status = formatContext.writeFrameInterleaved(packet)) < 0) {
                   return status;
               }
               packet.unref();
           }
       } while (sendStatus == AVERROR(EAGAIN));

       if (status != AVERROR(EAGAIN)) {
           return status;
       }

       return 0;
    }

    FFprobe output for the original video:

    Input #0, matroska,webm, from 'testvideo':
     Metadata:
       title           : TestVideo
       encoder         : libebml v1.3.0 + libmatroska v1.4.0
       creation_time   : 2014-12-23T03:38:05.000000Z
     Duration: 00:02:29.25, start: 0.000000, bitrate: 79549 kb/s
       Stream #0:0(rus): Video: h264 (High 4:4:4 Predictive), yuv444p10le(pc, bt709, progressive), 2048x858 [SAR 1:1 DAR 1024:429], 24 fps, 24 tbr, 1k tbn, 48 tbc (default)
       Stream #0:1(rus): Audio: pcm_s24le, 48000 Hz, 6 channels, s32 (24 bit), 6912 kb/s (default)

    Transcoded:

    Input #0, mov,mp4,m4a,3gp,3g2,mj2, from '123.mp4':
     Metadata:
       major_brand     : mp42
       minor_version   : 512
       compatible_brands: isomiso2avc1mp41
       encoder         : Lavf57.71.100
     Duration: 00:02:29.27, start: 0.000000, bitrate: 4282 kb/s
       Stream #0:0(rus): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 1280x720 [SAR 192:143 DAR 1024:429], 3940 kb/s, 24.01 fps, 24 tbr, 12288 tbn, 96 tbc (default)
       Metadata:
         handler_name    : VideoHandler
       Stream #0:1(rus): Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, 5.1, fltp, 336 kb/s (default)
       Metadata:
         handler_name    : SoundHandler
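
    As a cross-check on the transcoded file, ffprobe can dump each stream's extradata (-show_data coupled with -show_streams is documented to do this); an empty or truncated extradata block for the avc1 stream would match the QuickTime error above:

    ffprobe -v error -show_streams -show_data 123.mp4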