Advanced search

Media (91)

Other articles (13)

  • Configuring language support

    15 November 2010, by

    Accessing the configuration and adding supported languages
    To configure support for new languages, go to the "Administrer" (administration) section of the site.
    From there, the navigation menu gives access to a "Gestion des langues" (language management) section where support for new languages can be enabled.
    Each newly added language remains deactivatable as long as no object has been created in that language. Once one has, the language is greyed out in the configuration and (...)

  • Apache-specific configuration

    4 February 2011, by

    Specific modules
    For the Apache configuration, it is advisable to enable certain modules that are not specific to MediaSPIP but improve performance: mod_deflate and mod_headers, to have Apache compress pages automatically (see this tutorial); mod_expires, to handle expiry of hits correctly (see this tutorial).
    It is also advisable to add Apache support for the WebM MIME type, as described in this tutorial; a minimal sketch of these directives follows this list.
    Creating a (...)

  • Contribute to a better visual interface

    13 April 2011

    MediaSPIP is based on a system of themes and templates. Templates define the placement of information on the page, and can be adapted to a wide range of uses. Themes define the overall graphic appearance of the site.
    Anyone can submit a new graphic theme or template and make it available to the MediaSPIP community.
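
As a follow-up to the Apache item above, here is a minimal sketch of the directives it mentions. The module names (mod_deflate, mod_headers, mod_expires) and the WebM MIME type come from the article; the a2enmod helper and the exact directive values are assumptions to adapt to your distribution:

    # Debian/Ubuntu helper; on other systems, load the modules in httpd.conf
    a2enmod deflate headers expires

    # In the site or server configuration:
    AddType video/webm .webm                  # serve .webm files with the correct MIME type
    ExpiresActive On                          # let mod_expires emit expiry headers
    ExpiresByType video/webm "access plus 1 month"
    AddOutputFilterByType DEFLATE text/html text/css application/javascript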

On other sites (4248)

  • FFmpeg Concatenation Filters: Stream specifier ':0' in filtergraph matches no streams

    8 December 2018, by Anthony Eden

    I am developing an application that relies heavily on FFmpeg to perform various transformations on audio files. I am currently testing my FFmpeg configuration on the command line.

    I am trying to concatenate multiple audio files which are in different formats (primarily MP3, MP2 & WAV). I have been using the official Trac documentation (https://trac.ffmpeg.org/wiki/How%20to%20concatenate%20(join%2C%20merge)%20media%20files#differentcodec) to help me with this and have created the following command:

    ffmpeg -i OHIn.wav -i OHOut.wav -filter_complex '[0:0] [1:0] concat=n=2:a=1 [a]' -map '[a]' output.wav

    However, when I run this on Mac OS X using version 2.0.1 of FFmpeg, I get the following error message:

    Stream specifier ':0' in filtergraph description [0:0] [1:0] concat=n=2:a=1 [a] matches no streams.

    Here is my full output from the terminal:

    ~/ffmpeg -i OHIn.wav -i OHOut.wav -filter_complex '[0:0] [1:0] concat=n=2:a=1 [a]' -map '[a]' output.wav

    ffmpeg version 2.0.1 Copyright (c) 2000-2013 the FFmpeg developers
     built on Aug 15 2013 10:56:46 with llvm-gcc 4.2.1 (LLVM build 2336.11.00)
     configuration: --prefix=/Volumes/Ramdisk/sw --enable-gpl --enable-pthreads --enable-version3 --enable-libspeex --enable-libvpx --disable-decoder=libvpx --enable-libmp3lame --enable-libtheora --enable-libvorbis --enable-libx264 --enable-avfilter --enable-libopencore_amrwb --enable-libopencore_amrnb --enable-filters --enable-libgsm --arch=x86_64 --enable-runtime-cpudetect
     libavutil      52. 38.100 / 52. 38.100
     libavcodec     55. 18.102 / 55. 18.102
     libavformat    55. 12.100 / 55. 12.100
     libavdevice    55.  3.100 / 55.  3.100
     libavfilter     3. 79.101 /  3. 79.101
     libswscale      2.  3.100 /  2.  3.100
     libswresample   0. 17.102 /  0. 17.102
     libpostproc    52.  3.100 / 52.  3.100
    Guessed Channel Layout for  Input Stream #0.0 : stereo
    Input #0, wav, from 'OHIn.wav':
     Duration: 00:00:06.71, bitrate: 1411 kb/s
       Stream #0:0: Audio: pcm_s16le ([1][0][0][0] / 0x0001), 44100 Hz, stereo, s16, 1411 kb/s
    Guessed Channel Layout for  Input Stream #1.0 : stereo
    Input #1, wav, from 'OHOut.wav':
     Duration: 00:00:07.19, bitrate: 1411 kb/s
       Stream #1:0: Audio: pcm_s16le ([1][0][0][0] / 0x0001), 44100 Hz, stereo, s16, 1411 kb/s
    Stream specifier ':0' in filtergraph description [0:0] [1:0] concat=n=2:a=1 [a] matches no streams.

    I do not understand why this does not work. FFmpeg shows that streams 0:0 and 1:0 exist in the source files. The only other similar problems online concerned the use of single quotes on Windows, but testing confirms that does not apply to my Mac command line.

    Any help would be much appreciated.
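
    One detail from the concat filter documentation is worth noting here: v defaults to 1 and a to 0, so an audio-only filtergraph generally has to set v=0 explicitly, otherwise concat also expects a video stream per segment. A variant of the command above with that option added (same inputs; a sketch, not verified against this exact build):

    ffmpeg -i OHIn.wav -i OHOut.wav -filter_complex '[0:0] [1:0] concat=n=2:v=0:a=1 [a]' -map '[a]' output.wav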

  • FFmpeg: "Invalid data found when processing input" when reading video from memory

    24 April 2020, by Drawoceans

    I'm trying to read an MP4 video file from memory with C++ and the FFmpeg library, but I get an "Invalid data found when processing input" error. Here is my code:

    #include <cstdio>
    #include <cstring>
    #include <fstream>
    #include <filesystem>

    extern "C"
    {
    #include "libavformat/avformat.h"
    #include "libavformat/avio.h"
    }

    using namespace std;
    namespace fs = std::filesystem;

    struct VideoBuffer
    {
        uint8_t* ptr;
        size_t size;
    };

    static int read_packet(void* opaque, uint8_t* buf, int buf_size)
    {
        VideoBuffer* vb = (VideoBuffer*)opaque;
        buf_size = FFMIN(buf_size, vb->size);

        if (!buf_size) {
            return AVERROR_EOF;
        }

        printf("ptr:%p size:%zu\n", vb->ptr, vb->size);

        memcpy(buf, vb->ptr, buf_size);
        vb->ptr += buf_size;
        vb->size -= buf_size;

        return buf_size;
    }

    void print_ffmpeg_error(int ret)
    {
        char* err_str = new char[256];
        av_strerror(ret, err_str, 256);
        printf("%s\n", err_str);
        delete[] err_str;
    }

    int main()
    {
        fs::path video_path = "test.mp4";
        ifstream video_file;
        video_file.open(video_path);
        if (!video_file) {
            abort();
        }
        size_t video_size = fs::file_size(video_path);
        char* video_ptr = new char[video_size];
        video_file.read(video_ptr, video_size);
        video_file.close();

        VideoBuffer vb;
        vb.ptr = (uint8_t*)video_ptr;
        vb.size = video_size;

        AVIOContext* avio = nullptr;
        uint8_t* avio_buffer = nullptr;
        size_t avio_buffer_size = 4096;
        avio_buffer = (uint8_t*)av_malloc(avio_buffer_size);
        if (!avio_buffer) {
            abort();
        }

        avio = avio_alloc_context(avio_buffer, avio_buffer_size, 0, &vb, read_packet, nullptr, nullptr);

        AVFormatContext* fmt_ctx = avformat_alloc_context();
        if (!fmt_ctx) {
            abort();
        }
        fmt_ctx->pb = avio;

        int ret = 0;
        ret = avformat_open_input(&fmt_ctx, nullptr, nullptr, nullptr);
        if (ret < 0) {
            print_ffmpeg_error(ret);
        }

        avformat_close_input(&fmt_ctx);
        av_freep(&avio->buffer);
        av_freep(&avio);
        delete[] video_ptr;
        return 0;
    }


    And here is what I got:


    ptr:000001E10CEA0070 size:4773617
    ptr:000001E10CEA1070 size:4769521
    ...
    ptr:000001E10D32D070 size:1777
    [mov,mp4,m4a,3gp,3g2,mj2 @ 000001e10caaeac0] moov atom not found
    Invalid data found when processing input


    The FFmpeg version is 4.2.2, on Windows 10 with Visual Studio 2019 in x64 Debug mode. The FFmpeg library is the compiled Windows shared library from the FFmpeg homepage. Some of the code comes from the official avio_reading.c example. The target MP4 file plays normally in VLC, so I think the file itself is fine. Is anything wrong in my code? Or is it an FFmpeg library problem?
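
    One thing worth checking, though it is an assumption rather than something the post confirms: on Windows, an std::ifstream opened without std::ios::binary performs text-mode translation (rewriting line endings and treating 0x1A as end-of-file), which silently corrupts binary data such as an MP4 and could leave the demuxer unable to find the moov atom. A minimal sketch of the change:

    // Open the file in binary mode so no text-mode translation occurs on Windows.
    video_file.open(video_path, std::ios::binary);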


  • FFmpeg: av_rescale_q - time_base difference

    2 December 2020, by Michael IV

    I want to know, once and for all, how time base calculation and rescaling work in FFmpeg. Before getting to this question I did some research and found many contradictory answers, which made it even more confusing. So, based on the official FFmpeg examples, one has to



    "rescale output packet timestamp values from codec to stream timebase"



    with something like this:


    pkt->pts = av_rescale_q_rnd(pkt->pts, *time_base, st->time_base, AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX);
    pkt->dts = av_rescale_q_rnd(pkt->dts, *time_base, st->time_base, AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX);
    pkt->duration = av_rescale_q(pkt->duration, *time_base, st->time_base);


    But in this question someone asked a question similar to mine and gave more examples, each doing it differently. And contrary to the answer, which says that all of those ways are fine, for me only the following approach works:


    frame->pts += av_rescale_q(1, video_st->codec->time_base, video_st->time_base);


    In my application I generate video packets (H.264) at 60 fps outside the FFmpeg API, then write them into an MP4 container.


    I set explicitly:


    video_st->time_base = {1,60};
    video_st->r_frame_rate = {60,1};
    video_st->codec->time_base = {1,60};


    The first weird thing happens right after I have written the header for the output format context:


    AVDictionary *opts = nullptr;
    int ret = avformat_write_header(mOutputFormatContext, &opts);
    av_dict_free(&opts);


    After that, video_st->time_base is populated with:


    num = 1;
    den = 15360


    And I fail to understand why.


    I would like someone to explain that to me. Next, before writing a frame, I calculate the PTS for the packet. In my case PTS = DTS, as I don't use B-frames at all.


    And I have to do this:


    const int64_t duration = av_rescale_q(1, video_st->codec->time_base, video_st->time_base);
    totalPTS += duration; // totalPTS is a global variable
    packet->pts = totalPTS;
    packet->dts = totalPTS;
    av_write_frame(mOutputFormatContext, mpacket);


    I don't get why the codec and the stream have different time_base values even though I explicitly set them to be the same. And since I see across all the examples that av_rescale_q is always used to calculate the duration, I really want someone to explain this point.
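
    For what it's worth, here is a worked instance of the rescale above, using the values from this post ({1,60} codec time base, {1,15360} stream time base; the 15360 is plausibly a timescale the mov/mp4 muxer chooses itself when the header is written, overriding the value requested beforehand):

    // av_rescale_q(n, src, dst) computes n * src / dst as exact rationals:
    //   av_rescale_q(1, {1,60}, {1,15360}) = (1/60) / (1/15360) = 15360/60 = 256
    // so each 60 fps frame advances the stream clock by 256 ticks of 1/15360 s,
    // and 60 frames advance it by 60 * 256 = 15360 ticks, i.e. exactly one second.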


    Additionally, as a comparison and for the sake of experiment, I decided to try writing the stream into a WebM container, without using the libav output stream at all: I just grab the same packet I use to encode MP4 and write it manually into an EBML stream. In this case I calculate the duration like this:


    const int64_t duration =
        (video_st->codec->time_base.num / video_st->codec->time_base.den) * 1000;


    Multiplication by 1000 is required for WebM, as timestamps in that container are expressed in milliseconds. And this works. So why, in the case of MP4 stream encoding, is there a difference in time_base that has to be rescaled?
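
    A caveat on the expression above, assuming time_base.num and time_base.den are plain integers as in AVRational: num / den truncates to 0 for {1,60}, so the order of operations matters; since the post says this works, the real code presumably divides last or uses floating point. A sketch that avoids the truncation:

    // Hypothetical reordering: 1000 * 1 / 60 = 16 ms, whereas (1/60) * 1000 = 0.
    const int64_t duration_ms = 1000LL * video_st->codec->time_base.num
                              / video_st->codec->time_base.den;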
