
Other articles (53)

  • Websites made with MediaSPIP

    2 May 2011, by

    This page lists some websites based on MediaSPIP.

  • Use, discuss, criticize

    13 April 2011, by

    Talk to people directly involved in MediaSPIP’s development, or to people around you who could use MediaSPIP to share, enhance or develop their creative projects.
    The bigger the community, the more MediaSPIP’s potential will be explored and the faster the software will evolve.
    A discussion list is available for all exchanges between users.

  • Creating farms of unique websites

    13 April 2011, by

    MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
    This allows (among other things):
      - implementation costs to be shared between several different projects / individuals
      - rapid deployment of multiple unique sites
      - creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)

On other sites (7460)

  • swresample/resample : Properly empty MMX state

    11 June 2022, by Andreas Rheinhardt

    There is an x86-32 MMXEXT implementation for resampling
    planar 16bit data. multiple_resample() therefore calls
    emms_c() if it thinks that this is needed. And this is bad:

    1. It is a maintenance nightmare because changes to the
    x86 resample DSP code would necessitate changes to the check
    whether to call emms_c().
    2. The return value of av_get_cpu_flags() does not tell
    whether the MMX DSP functions are in use, as they could
    have been overridden by av_force_cpu_flags().
    3. The MMX DSP functions will never be overridden in case of
    an x86-32 build with --disable-sse2. In this scenario lots of
    resampling tests (like swr-resample_exact_lin_async-s16p-8000-48000)
    fail because the cpuflags indicate that SSE2 is available
    (presuming that the test is run on a CPU with SSE2).
    4. The check includes a call to av_get_cpu_flags(). This is not
    optimized away for arches other than x86-32.
    5. The check takes about as much time as emms_c() itself,
    making it pointless.

    This commit therefore removes the check and calls emms_c()
    unconditionally (it is a no-op for non-x86).
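    The shape of the change can be sketched with stand-in code (stub names and a stub flag constant, not the actual FFmpeg sources): before the patch, multiple_resample() guarded emms_c() behind a CPU-flags check; after it, emms_c() runs unconditionally and compiles to a no-op on non-x86.

    ```cpp
    #include <cassert>

    // Stub standing in for FFmpeg's emms_c(): on non-x86 builds the real
    // macro expands to nothing, so an unconditional call costs nothing there.
    static int emms_calls = 0;
    static void emms_c(void) { ++emms_calls; }

    // Hypothetical check resembling the removed logic: consult CPU flags
    // to guess whether the MMX DSP functions were used.
    static bool mmx_dsp_might_be_in_use(int cpu_flags) {
        const int MMXEXT_FLAG_STUB = 0x2;  // stand-in constant
        return (cpu_flags & MMXEXT_FLAG_STUB) != 0;
    }

    static void multiple_resample_old(int cpu_flags) {
        // ... resampling work ...
        if (mmx_dsp_might_be_in_use(cpu_flags))  // fragile: flags may be forced
            emms_c();
    }

    static void multiple_resample_new(int /*cpu_flags*/) {
        // ... resampling work ...
        emms_c();  // unconditional: always correct, no-op off x86
    }

    int main() {
        // With flags that do not reflect the DSP functions actually in use,
        // the old check can skip a needed emms_c().
        multiple_resample_old(/*cpu_flags=*/0);
        assert(emms_calls == 0);  // MMX state would be left dirty
        multiple_resample_new(0);
        assert(emms_calls == 1);  // new code always empties it
        return 0;
    }
    ```

    This mirrors points 2 and 3 of the commit message: the guard depends on flags that need not match the installed DSP functions, while the unconditional call is trivially correct.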

    Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>

    • [DH] libswresample/resample.c
  • ffmpeg transparent png frames to webm gets autocropped and opaque instead of showing the input frames

    26 May 2022, by Regda

    I have a webm file that renders transparency as black when transparency is not supported, but unpacking that webm file to png frames yields pngs with transparency. I tried to repack the transparent png frames into a webm file that behaves as described above (transparent, but black where transparency is not supported).

    The current issue (where I need help) is that ffmpeg seems to apply some sort of optimization: it creates an area filled with an analyzed color, crops it to the size of the opaque content/pixels and leaves the rest white. I don't want this.

    According to VLC, the metadata of the original file says that Lavc58.65.102 libvpx-vp9 was used as the encoder and that Stream 0 has the following information:

        Codec: Google/On2's VP9 Video (VP90)
        Language: English
        Type: Video
        Video resolution: 816x624
        Buffer dimensions: 816x640
        Frame rate: 30.000300
        Decoded format: Planar 4:2:0 YUV
        Orientation: Top left

    To reproduce the issue I created 2 transparent frames (a green and a blue cross) where the opaque content is smaller than the image size, and copied them 10 times each to be able to actually preview the webm file in VLC. The 2 frames look like this:
    (image: the 2 different frames used for reproducing the transparency issue)

    Then I put together the following ffmpeg command and executed it:

        ffmpeg -framerate 30 -t 7.2 -i frames/%04d.png -b:v 1307k -c:v libvpx-vp9 -color_range 1 -pix_fmt yuva420p -metadata:s:v:0 alpha_mode="1" out.webm

    (changing the pixel format e.g. from yuv to yuva doesn't seem to change anything)

    ffmpeg console output:

    ffmpeg version N-102605-g4c705a2775 Copyright (c) 2000-2021 the FFmpeg developers
      built with gcc 10-win32 (GCC) 20210408
      configuration: --prefix=/ffbuild/prefix --pkg-config-flags=--static --pkg-config=pkg-config --cross-prefix=x86_64-w64-mingw32- --arch=x86_64 --target-os=mingw32 --enable-version3 --disable-debug --enable-shared --disable-static --disable-w32threads --enable-pthreads --enable-iconv --enable-libxml2 --enable-zlib --enable-libfreetype --enable-libfribidi --enable-gmp --enable-lzma --enable-fontconfig --enable-libvorbis --enable-opencl --enable-libvmaf --enable-vulkan --enable-amf --enable-libaom --disable-avisynth --enable-libdav1d --disable-libdavs2 --enable-ffnvcodec --enable-cuda-llvm --enable-libglslang --enable-libgme --enable-libass --enable-libbluray --enable-libmp3lame --enable-libopus --enable-libtheora --enable-libvpx --enable-libwebp --enable-lv2 --enable-libmfx --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-librav1e --disable-librubberband --enable-schannel --enable-sdl2 --enable-libsoxr --enable-libsrt --enable-libsvtav1 --enable-libtwolame --enable-libuavs3d --disable-libvidstab --disable-libx264 --disable-libx265 --disable-libxavs2 --disable-libxvid --enable-libzimg --extra-cflags=-DLIBTWOLAME_STATIC --extra-cxxflags= --extra-ldflags=-pthread --extra-ldexeflags= --extra-libs=-lgomp
      libavutil      57.  0.100 / 57.  0.100
      libavcodec     59.  1.100 / 59.  1.100
      libavformat    59.  2.101 / 59.  2.101
      libavdevice    59.  0.100 / 59.  0.100
      libavfilter     8.  0.101 /  8.  0.101
      libswscale      6.  0.100 /  6.  0.100
      libswresample   4.  0.100 /  4.  0.100
    Input #0, image2, from 'frames/%04d.png':
      Duration: 00:00:00.67, start: 0.000000, bitrate: N/A
      Stream #0:0: Video: png, rgba(pc), 816x624 [SAR 2835:2835 DAR 17:13], 30 fps, 30 tbr, 30 tbn
    Stream mapping:
      Stream #0:0 -> #0:0 (png (native) -> vp9 (libvpx-vp9))
    Press [q] to stop, [?] for help
    [libvpx-vp9 @ 000001c97b52b000] v1.10.0
    Output #0, webm, to 'out.webm':
      Metadata:
        encoder         : Lavf59.2.101
      Stream #0:0: Video: vp9, yuv420p(tv, progressive), 816x624 [SAR 1:1 DAR 17:13], q=2-31, 1307 kb/s, 30 fps, 1k tbn
        Metadata:
          alpha_mode      : 1
          encoder         : Lavc59.1.100 libvpx-vp9
        Side data:
          cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: N/A
    frame=   20 fps=0.0 q=0.0 Lsize=       2kB time=00:00:00.63 bitrate=  21.5kbits/s speed=1.09x
    video:1kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 68.316833%

    Playback of the webm file in VLC shows something totally different: the area of the cross is filled with its color and the rest is white, where it should all be black, or transparent if possible (yes, I know that in VLC it's not possible...):
    (image: out.webm in VLC shows a block instead of a green or blue cross)

    It would be nice to have an ffmpeg command that produces the expected result (a webm video showing a cross on a black/transparent background).

    PS: Regarding the title of this post: I don't know what this issue is called, so feel free to edit the title; I'm out of ideas.

  • Upsample and encode audio stream

    8 June 2022, by Anton Issaikin

    Basically, after transcoding pcm_alaw 8 kHz to mp3 I can hear only a brief, unrecognizable sound in the first 1-2 seconds. So something is wrong with the pts/dts, the packed-to-planar conversion, or the upsampling.

    My application transcodes an rtsp camera stream to a file, video and audio. Video works fine, and audio remuxing as well. Now I have a pcm_alaw 8 kHz audio stream and want to transcode it to an mp4 file along with the video.

    The code is too cumbersome to construct a reproducible part from, so first I want to know whether my logic is right. Here is my draft process (assume all errors are checked and handled):

    create the encoder:

        codec_ = avcodec_find_encoder(AV_CODEC_ID_MP3);

        enc_ctx_ = avcodec_alloc_context3(codec_);

        enc_ctx_->bit_rate = 64000;
        enc_ctx_->codec_type = AVMEDIA_TYPE_AUDIO;

        enc_ctx_->sample_fmt   = codec_->sample_fmts ? codec_->sample_fmts[0] : AV_SAMPLE_FMT_S32P;

        // functions from here https://www.ffmpeg.org/doxygen/4.1/encode_audio_8c-example.html
        enc_ctx_->sample_rate    = select_sample_rate(codec_);
        enc_ctx_->channel_layout = select_channel_layout(codec_);
        enc_ctx_->channels       = av_get_channel_layout_nb_channels(enc_ctx_->channel_layout);
        enc_ctx_->time_base = (AVRational){1, enc_ctx_->sample_rate};
        enc_ctx_->strict_std_compliance = FF_COMPLIANCE_EXPERIMENTAL;

        if (is_global_header) {
            enc_ctx_->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
        }

        avcodec_open2(enc_ctx_, codec_, nullptr);
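    For reference, the select_sample_rate() helper used above comes from the linked encode_audio example, where it picks the codec's supported rate closest to 44100 Hz. A self-contained sketch of that selection logic (with a stub rate list instead of a real AVCodec):

    ```cpp
    #include <cassert>
    #include <cstdlib>

    // Sketch of select_sample_rate() from the encode_audio example:
    // choose the supported sample rate closest to 44100 Hz.
    static int select_sample_rate_from(const int *supported) {
        if (!supported)
            return 44100;  // default when the codec lists no rates
        int best = 0;
        for (const int *p = supported; *p; ++p) {
            if (!best || std::abs(44100 - *p) < std::abs(44100 - best))
                best = *p;
        }
        return best;
    }

    int main() {
        // Stub list standing in for codec->supported_samplerates
        // (0-terminated, as in FFmpeg).
        const int rates[] = {8000, 22050, 44100, 48000, 0};
        assert(select_sample_rate_from(rates) == 44100);
        assert(select_sample_rate_from(nullptr) == 44100);
        const int only_high[] = {48000, 96000, 0};
        assert(select_sample_rate_from(only_high) == 48000);
        return 0;
    }
    ```

    If the encoder lists 44100 Hz it will be chosen, which is why the 8 kHz input ends up being upsampled here.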

    create the resampler (in_frame):

        audio_fifo_ = av_audio_fifo_alloc(enc_ctx_->sample_fmt, enc_ctx_->channels, 1);

        in_ch_layout_ = in_frame->channel_layout;
        in_sample_fmt = in_frame->format;
        in_sample_rate_ = in_frame->sample_rate;

        swr_ctx_ = swr_alloc_set_opts(NULL,                        // we're allocating a new context
                                 enc_ctx_->channel_layout,         // out_ch_layout
                                 enc_ctx_->sample_fmt,             // out_sample_fmt
                                 enc_ctx_->sample_rate,            // out_sample_rate
                                 in_frame->channel_layout,         // in_ch_layout
                                 (AVSampleFormat)in_frame->format, // in_sample_fmt
                                 in_frame->sample_rate,            // in_sample_rate
                                 0,                                // log_offset
                                 NULL);                            // log_ctx

        swr_init(swr_ctx_);

    resample (in_frame, start_pts, start_dts):

        auto resampled_frame = av_frame_alloc();

        auto dst_nb_samples = av_rescale_rnd(swr_get_delay(swr_ctx_, in_frame->sample_rate) +
                                        in_frame->nb_samples, enc_ctx_->sample_rate, in_frame->sample_rate, AV_ROUND_UP);

        // resampled_frame->nb_samples     = dst_nb_samples;
        resampled_frame->format         = enc_ctx_->sample_fmt;
        resampled_frame->channel_layout = enc_ctx_->channel_layout;
        // resampled_frame->channels       = enc_ctx_->channels;
        resampled_frame->sample_rate    = enc_ctx_->sample_rate;

        error = swr_convert_frame(swr_ctx_, resampled_frame, in_frame);

        /* Make the FIFO as large as it needs to be to hold both
         * the old and the new samples. */
        if (av_audio_fifo_size(audio_fifo_) < dst_nb_samples) {
            av_audio_fifo_realloc(audio_fifo_, dst_nb_samples);
        }

        /* Store the new samples in the FIFO buffer. */
        auto nb_samples = av_audio_fifo_write(audio_fifo_,
                                              reinterpret_cast<void **>(resampled_frame->extended_data),
                                              resampled_frame->nb_samples);

        int delay = 0;
        // trying to split the resampled frame into desired chunks
        while (av_audio_fifo_size(audio_fifo_) > 0) {
            const int frame_size = FFMIN(av_audio_fifo_size(audio_fifo_), enc_ctx_->frame_size);

            auto out_frame = av_frame_alloc();

            out_frame->nb_samples       = frame_size;
            out_frame->format           = enc_ctx_->sample_fmt;
            out_frame->channel_layout   = enc_ctx_->channel_layout;
            out_frame->channels         = enc_ctx_->channels;
            out_frame->sample_rate      = enc_ctx_->sample_rate;

            av_frame_get_buffer(out_frame, 0);

            av_audio_fifo_read(audio_fifo_, (void **)out_frame->data, frame_size);  // should read frame_size samples

            // ***** tried both cases
            out_frame->pts = in_frame->pts + delay;
            out_frame->pkt_dts = in_frame->pkt_dts + delay;
            // swr_next_pts(swr_ctx_, in_frame->pts) + delay;
            // swr_next_pts(swr_ctx_, in_frame->pkt_dts) + delay;

            result.push_back(out_frame);

            delay += frame_size;
        }

        return result;

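    As a sanity check on the dst_nb_samples arithmetic in the step above, the round-up rescale can be reproduced in plain code (a reimplementation of av_rescale_rnd's AV_ROUND_UP case for illustration, not the FFmpeg function itself; the 1024-sample input frame is an assumed example value):

    ```cpp
    #include <cassert>
    #include <cstdint>

    // Illustration only: ceil(a * b / c), matching av_rescale_rnd(a, b, c,
    // AV_ROUND_UP) for the small positive values used here.
    static int64_t rescale_round_up(int64_t a, int64_t b, int64_t c) {
        return (a * b + c - 1) / c;
    }

    int main() {
        // 8 kHz input resampled to 44.1 kHz: one 1024-sample input frame
        // (with no resampler delay pending) yields at most
        // ceil(1024 * 44100 / 8000) = 5645 output samples.
        int64_t delay = 0, in_nb = 1024;
        assert(rescale_round_up(delay + in_nb, 44100, 8000) == 5645);

        // An MP3 encoder frame holds 1152 samples, so one input frame
        // fills several encoder frames plus a remainder kept in the FIFO.
        assert(5645 / 1152 == 4);
        assert(5645 % 1152 == 1037);
        return 0;
    }
    ```

    The leftover samples are exactly why the FIFO-based chunking loop is needed: output chunks no longer line up one-to-one with input frames.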

    encoding and muxing (in_frame):

        bool DoesNeedResample(const AVFrame * in_frame) {
            assert(("DoesNeedResample: in_frame is empty", in_frame));
            assert(("DoesNeedResample: encoder is not started", is_init_));

            if (in_frame->sample_rate != enc_ctx_->sample_rate ||
                in_frame->channel_layout != enc_ctx_->channel_layout ||
                in_frame->channels != enc_ctx_->channels ||
                in_frame->format != enc_ctx_->sample_fmt) {
                return true;
            }

            return false;
        }

        av_frame_make_writable(in_frame);

        streamserver::AVFrames encoding_frames;
        if (DoesNeedResample(in_frame)) {
            encoding_frames = Resample(in_frame,
                av_rescale_q(in_frame->pts, in_audio_stream_timebase_, out_audio_stream_->time_base),
                av_rescale_q(in_frame->pkt_dts, in_audio_stream_timebase_, out_audio_stream_->time_base));
        } else {
            encoding_frames.push_back(av_frame_clone(in_frame));
        }

        for (auto frame : encoding_frames) {
            if ((err = avcodec_send_frame(encoder_ctx, frame)) < 0) {
                AVFrameFree(&frame);
            }

            while (err >= 0) {
                pkt_->data = NULL;
                pkt_->size = 0;
                av_init_packet(pkt_);

                err = avcodec_receive_packet(encoder_ctx, pkt_);
                if (err == AVERROR(EAGAIN) || err == AVERROR_EOF) {
                    break;
                } else if (err < 0) {
                    break;
                }

                pkt_->stream_index = out_audio_stream_->index;

                av_interleaved_write_frame(ofmt_ctx_, pkt_);
            }

            av_packet_unref(pkt_);
        }

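    One likely source of the corruption is the pts arithmetic in the resample step: delay counts output samples in the encoder's {1, sample_rate} time base, while in_frame->pts is in the input stream's time base, so adding them mixes units. A hedged sketch of consistent timestamps (illustrative helper, not the application's actual code): convert the input pts to the encoder time base once, then advance by each chunk's sample count.

    ```cpp
    #include <cassert>
    #include <cstdint>

    // Illustration of av_rescale_q-style conversion from time base
    // {bn, bd} to {cn, cd}; exact for the values used here.
    static int64_t rescale_q(int64_t pts, int64_t bn, int64_t bd,
                             int64_t cn, int64_t cd) {
        return pts * bn * cd / (bd * cn);
    }

    int main() {
        const int in_rate = 8000, out_rate = 44100;

        // Input frame pts in the input time base {1, 8000}: one second in.
        int64_t in_pts = 8000;

        // Convert once to the encoder time base {1, 44100}; thereafter each
        // FIFO chunk advances by the number of samples it contains, so all
        // values stay in one unit.
        int64_t next_pts = rescale_q(in_pts, 1, in_rate, 1, out_rate);
        assert(next_pts == 44100);

        const int frame_size = 1152;  // MP3 frame size in samples
        int64_t chunk0_pts = next_pts;               // first encoder frame
        int64_t chunk1_pts = next_pts + frame_size;  // second encoder frame
        assert(chunk1_pts - chunk0_pts == frame_size);
        return 0;
    }
    ```

    Adding the raw sample count to a pts expressed in a different time base, as the draft does, shifts every chunk after the first and can produce exactly the kind of briefly audible, then garbled output described above.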

    The sound in the resulting video is corrupted; see the first paragraph for a description.

    In https://www.ffmpeg.org/doxygen/4.1/transcode_aac_8c-example.html there are these lines:

            /*
            * Perform a sanity check so that the number of converted samples is
            * not greater than the number of samples to be converted.
            * If the sample rates differ, this case has to be handled differently
            */
            av_assert0(output_codec_context->sample_rate == input_codec_context->sample_rate);

    How should such cases be handled? I tried to split the resampled frames via the FIFO in the example above!