
Other articles (58)

  • Updating from version 0.1 to 0.2

    24 June 2013

    An explanation of the notable changes made in moving from MediaSPIP version 0.1 to version 0.2. What's new?
    Software dependencies: the latest versions of FFMpeg (>= v1.2.1) are now used; the dependencies for Smush are installed; MediaInfo and FFprobe are installed to retrieve metadata; ffmpeg2theora is no longer used; flvtool2 is no longer installed, in favour of flvtool++; ffmpeg-php, which is no longer maintained, is no longer installed (...)

  • Customising by adding your logo, banner, or background image

    5 September 2013

    Some themes support three customisation elements: adding a logo; adding a banner; adding a background image.

  • Writing a news item

    21 June 2013

    Present changes to your MediaSPIP, or news about your projects, on your MediaSPIP using the news section.
    In MediaSPIP's default theme, spipeo, news items are displayed at the bottom of the main page, below the editorials.
    You can customise the form used to create a news item.
    News item creation form: for a document of type "news item", the default fields are: publication date (customise the publication date) (...)

On other sites (6074)

  • merging 2 edited videos without saving them, using ffmpeg

    14 April 2021, by Eswar T

    I want to combine 2 videos, assuming each of them has already been edited on its own.

    Here is how we normally do it.

    video 1:

    ffmpeg -i 1.mp4 -filter:a "volume=0.0" test1.mp4

    video 2:

    ffmpeg -i 2.mp4 -filter:a "volume=10.0" test2.mp4

    Now I can combine them using:

    ffmpeg -i test1.mp4 -i test2.mp4 -filter_complex "[0:v:0][0:a:0][1:v:0][1:a:0]concat=n=2:v=1:a=1[outv][outa]" -map "[outv]" -map "[outa]" output.mp4

    So my question is: is there a way to turn this 3-step process into a single step, without saving the files from step 1 and step 2?

    I do know that we can chain the commands with &&, but my main query is whether there is a way to do this without saving the edited files for video 1 and video 2.

    I hope my query is reasonably clear.

    Question edited/added:

    ffmpeg -i test.mp4 -filter:a "volume=8.0,atempo=4.0" -vf "transpose=2,transpose=2,setpts=1/4*PTS" -s 640x480 test.mkv

    Can we also do all of these operations in the merge command (changing video speed, resolution, rotation, framerate, and trimming)?
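
    For what it's worth, here is a sketch of how the three steps could collapse into a single command, reusing the file names and filter values from above: each input gets its own chain inside one filter_complex graph, and the chains feed concat directly, so no intermediate files are ever written. Two assumptions: the build of ffmpeg is recent enough to accept atempo=4.0 (older builds cap atempo at 2.0 per instance, so you would write atempo=2.0,atempo=2.0 instead), and both video legs are scaled to a common resolution, since concat requires matching frame sizes.

    ffmpeg -i 1.mp4 -i 2.mp4 -filter_complex \
      "[0:v]transpose=2,transpose=2,setpts=PTS/4,scale=640:480[v0]; \
       [0:a]volume=0.0,atempo=4.0[a0]; \
       [1:v]scale=640:480[v1]; \
       [1:a]volume=10.0[a1]; \
       [v0][a0][v1][a1]concat=n=2:v=1:a=1[outv][outa]" \
      -map "[outv]" -map "[outa]" output.mp4

    Trimming fits the same pattern, with trim/atrim at the head of each chain.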


  • How do I use FFMPEG/libav to access the data in individual audio samples?

    15 October 2022, by Breadsnshreds

    The end result is I'm trying to visualise the audio waveform to use in a DAW-like software. So I want to get each sample's value and draw it. With that in mind, I'm currently stumped by trying to gain access to the values stored in each sample. For the time being, I'm just trying to access the value in the first sample - I'll build it into a loop once I have some working code.

    I started off by following the code in this example. However, LibAV/FFMPEG has been updated since then, so a lot of the code is deprecated or straight up doesn't work the same anymore.

    Based on the example above, I believe the logic is as follows:

    1. get the formatting info of the audio file
    2. get audio stream info from the format
    3. check that the codec required for the stream is an audio codec
    4. get the codec context (I think this is info about the codec) - This is where it gets kinda confusing for me
    5. create an empty packet and frame to use - packets are for holding compressed data and frames are for holding uncompressed data
    6. the format reads the first frame from the audio file into our packet
    7. pass that packet into the codec context to be decoded
    8. pass our frame to the codec context to receive the uncompressed audio data of the first frame
    9. create a buffer to hold the values and try allocating samples to it from our frame

    From debugging my code, I can see that step 7 succeeds and the packet that was empty receives some data. In step 8, the frame doesn't receive any data, and this is what I need help with. I understand that once I get the frame, assuming a stereo audio file, I should have two samples per frame, so really I just need your help to get uncompressed data into the frame.
    I've scoured the documentation for loads of different classes, and I'm pretty sure I'm using the right classes and functions to achieve my goal, but evidently not (I'm also using Qt, so I'm using qDebug throughout, and a QString named path to hold the URL for the audio file). So without further ado, here's my code:
    extern "C" {
    #include <libavformat/avformat.h>
    #include <libavcodec/avcodec.h>
    #include <libavutil/samplefmt.h>
    }
    #include <QDebug>
    #include <QString>

    // Step 1 - get the formatting info of the audio file
    AVFormatContext* format = avformat_alloc_context();
    if (avformat_open_input(&format, path.toStdString().c_str(), NULL, NULL) != 0) {
        qDebug() << "Could not open file " << path;
        return -1;
    }

    // Step 2 - get audio stream info from the format
    if (avformat_find_stream_info(format, NULL) < 0) {
        qDebug() << "Could not retrieve stream info from file " << path;
        return -1;
    }

    // Step 3 - check that the codec required for the stream is an audio codec
    int stream_index = -1;
    for (unsigned int i = 0; i < format->nb_streams; i++) {
        if (format->streams[i]->codecpar->codec_type == AVMEDIA_TYPE_AUDIO) {
            stream_index = i;
            break;
        }
    }

    if (stream_index == -1) {
        qDebug() << "Could not retrieve audio stream from file " << path;
        return -1;
    }

    // Step 4 - get the codec context
    const AVCodec *codec = avcodec_find_decoder(format->streams[stream_index]->codecpar->codec_id);
    AVCodecContext *codecContext = avcodec_alloc_context3(codec);
    avcodec_open2(codecContext, codec, NULL);

    // Step 5 - create an empty packet and frame to use
    AVPacket *packet = av_packet_alloc();
    AVFrame *frame = av_frame_alloc();

    // Step 6 - the format reads the first frame from the audio file into our packet
    av_read_frame(format, packet);

    // Step 7 - pass that packet into the codec context to be decoded
    avcodec_send_packet(codecContext, packet);

    // Step 8 - pass our frame to the codec context to receive the uncompressed audio data of the first frame
    avcodec_receive_frame(codecContext, frame);

    // Step 9 - create a buffer to hold the values and try allocating samples to it from our frame
    double *buffer;
    av_samples_alloc((uint8_t**) &buffer, NULL, 1, frame->nb_samples, AV_SAMPLE_FMT_DBL, 0);
    qDebug() << "packet: " << &packet;
    qDebug() << "frame: " << frame;
    qDebug() << "buffer: " << buffer;


    For the time being, step 9 is incomplete, as you can probably tell. But for now, I need help with step 8. Am I missing a step, using the wrong function, or the wrong class? Cheers.
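
    For what it's worth, the likely gap is in step 4 rather than step 8: the snippet never copies the stream's codec parameters into the freshly allocated context, so the decoder is opened without knowing the sample rate, channel layout, or sample format, and avcodec_receive_frame then has nothing to give. A minimal sketch of the missing call, together with checking step 8's return code:

    const AVCodec *codec = avcodec_find_decoder(format->streams[stream_index]->codecpar->codec_id);
    AVCodecContext *codecContext = avcodec_alloc_context3(codec);
    // Copy sample rate, channel layout, sample format, etc. from the stream
    // into the context before opening the decoder.
    avcodec_parameters_to_context(codecContext, format->streams[stream_index]->codecpar);
    if (avcodec_open2(codecContext, codec, NULL) < 0) {
        qDebug() << "Could not open decoder";
        return -1;
    }

    // Step 8, with the return code checked: AVERROR(EAGAIN) means the decoder
    // wants more input first, so send another packet and call this again.
    int ret = avcodec_receive_frame(codecContext, frame);
    if (ret < 0 && ret != AVERROR(EAGAIN)) {
        qDebug() << "Decoding failed with error code" << ret;
        return -1;
    }

    Once a frame does arrive, note that for planar sample formats (e.g. AV_SAMPLE_FMT_FLTP) the samples for channel c live in frame->data[c], with frame->nb_samples values per channel, so a stereo frame generally holds far more than two samples.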

  • Revision c8ed36432e: Non-uniform quantization experiment

    4 March 2015, by Deb Mukherjee

    Changed Paths:
     Modify /configure
     Modify /vp9/common/vp9_blockd.h
     Modify /vp9/common/vp9_onyxc_int.h
     Modify /vp9/common/vp9_quant_common.c
     Modify /vp9/common/vp9_quant_common.h
     Modify /vp9/common/vp9_rtcd_defs.pl
     Modify /vp9/decoder/vp9_decodeframe.c
     Modify /vp9/decoder/vp9_detokenize.c
     Modify /vp9/encoder/vp9_block.h
     Modify /vp9/encoder/vp9_encodemb.c
     Modify /vp9/encoder/vp9_encodemb.h
     Modify /vp9/encoder/vp9_quantize.c
     Modify /vp9/encoder/vp9_quantize.h
     Modify /vp9/encoder/vp9_rdopt.c

    Non-uniform quantization experiment

    This framework allows the lower quantization bins to be shrunk or
    expanded to match the source distribution more closely (assuming a
    generalized Gaussian-like, centrally peaked model for the coefficients)
    in an entropy-constrained sense. Specifically, the widths of bins 0-4
    are modified as factors of the nominal quantization step size, while
    from bin 5 onwards all bins keep the nominal step size. Further,
    different bin-width profiles as well as reconstruction values can be
    used depending on the coefficient band and on the quantization step
    size (divided into 5 ranges).
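
    As a rough sketch of the idea (illustrative only, not the actual libvpx change; the width factors below are invented for the example), a quantizer with shrunken or expanded low bins can be written so that the first five bin widths are per-bin factors of the nominal step, in 1/16 units, and every later bin falls back to the nominal width:

    #include <stdlib.h>

    /* Invented bin-width factors for bins 0-4, in 1/16 units of the nominal
       step; values below 16 shrink a bin, values above 16 widen it. */
    static const int bin_width_q4[5] = { 14, 15, 16, 16, 18 };

    static int quantize_nonuniform(int coeff, int qstep) {
      const int sign = coeff < 0 ? -1 : 1;
      int x = abs(coeff);
      int level = 0;
      /* Walk the five non-uniform bins first. */
      for (int i = 0; i < 5; ++i) {
        const int w = (bin_width_q4[i] * qstep) >> 4;
        if (x < w) return sign * level;
        x -= w;
        ++level;
      }
      /* From bin 5 onwards every bin has the nominal width. */
      return sign * (level + x / qstep);
    }

    The per-band reconstruction values mentioned above would then be a second table, picking where inside each bin the dequantizer lands.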

    A small gain of about 0.16% is currently observed on derflr with the
    same parameters for all q values.
    Optimizing the parameters based on the qstep value is left as a TODO for now.

    Results on derflr with all experiments enabled are +6.08% (up from 5.88%).

    Experiments are in progress to tune the parameters for different
    coefficient bands and quantization step ranges.

    Change-Id: I88429d8cb0777021bfbb689ef69b764eafb3a1de