Advanced search

Media (1)


Other articles (72)

  • Improving the base version

    13 September 2013

    A nicer multiple select
    The Chosen plugin improves the usability of multiple-select form fields. Compare the two images below.
    To use it, enable the Chosen plugin (Site general configuration > Plugin management), then configure it (Templates > Chosen) by enabling Chosen on the public site and specifying which form elements to enhance, for example select[multiple] for multiple-select lists (...)

  • Emballe médias: what is it for?

    4 February 2011, by

    This plugin is designed to run sites that publish documents of all kinds.
    It creates "media" items, meaning: a "media" is an article in the SPIP sense, created automatically when a document is uploaded, whether audio, video, image or text; only one document can be attached to a "media" article;

  • The plugin: Gestion de la mutualisation

    2 March 2010, by

    The Gestion de mutualisation plugin makes it possible to manage the various MediaSPIP channels from a single master site. Its goal is to provide a pure-SPIP solution to replace this older solution.
    Basic installation
    Install the SPIP files on the server.
    Then add the "mutualisation" plugin at the root of the site, as described here.
    Customise the central mes_options.php file as you wish. As an example, here is the one from the mediaspip.net platform:
    <?php (...)

On other sites (7677)

  • Node.js Stream Mp3 to http without having to save file

    10 January 2019, by user2758113

    I am trying to stream just audio from a youtube link straight to http with node.js.

    My code looks like this, I am using express 4.0.

    var express = require('express');
    var router = express.Router();
    var ytdl = require('ytdl');
    var ffmpeg = require('fluent-ffmpeg');

    router.get('/', function(req, res) {

        var url = 'https://www.youtube.com/watch?v=GgcHlZsOgQo';
        var video = ytdl(url); // readable stream of the YouTube download

        res.set({
            "Content-Type": "audio/mpeg"
        });

        // Transcode the download stream to mp3 and write it to the response
        new ffmpeg({source: video})
            .toFormat('mp3')
            .writeToStream(res, function(data, err) {
                if (err) console.log(err);
            });

    });

    module.exports = router;

    Now, I’m able to stream the video’s audio to the response if I save the file first and then pipe it to the response, but I’d rather find a way to go straight from the download, through ffmpeg, to the response.

    Not sure if this is possible. The main goal is to keep it as lightweight as possible and avoid reading from files.

    I’ve seen this code, which is essentially what I’d like to do minus the part that saves to a file.

    [image: part of the error]

  • FFmpeg - Putting segments of same video together

    11 June 2020, by parthlr

    I am trying to take different segments of the same video and put them together in a new video, essentially cutting out the parts in between the segments. I have built on the answer to this question that I asked before to try to do this. I figured that, when joining segments of the same video, I would have to subtract each segment's first dts so that it starts exactly after the previous segment.


    However, when I attempt to do this, I once again get the error Application provided invalid, non monotonically increasing dts to muxer in stream 0. The error occurs for both streams 0 and 1 (video and audio), and I seem to receive it only for the first packet of each segment.


    On top of that, the output file plays the segments in the correct order, but the video freezes for about a second at the transition from one segment to the next. I have a feeling this is because each packet's dts is not set properly, so the segment is timestamped about a second later than it should be.


    This is the code that I have written:


    Video and ClipSequence structs:


    typedef struct Video {
        char* filename;
        AVFormatContext* inputContext;
        AVFormatContext* outputContext;
        AVCodec* videoCodec;
        AVCodec* audioCodec;
        AVStream* inputStream;
        AVStream* outputStream;
        AVCodecContext* videoCodecContext_I; // Input
        AVCodecContext* audioCodecContext_I; // Input
        AVCodecContext* videoCodecContext_O; // Output
        AVCodecContext* audioCodecContext_O; // Output
        int videoStream;
        int audioStream;
        SwrContext* swrContext;
    } Video;

    typedef struct ClipSequence {
        VideoList* videos;
        AVFormatContext* outputContext;
        AVStream* outputStream;
        // Separate per-stream counters, matching the v_/a_ fields used below
        int64_t v_firstdts, a_firstdts;
        int64_t v_lastdts, a_lastdts;
        int64_t v_currentdts, a_currentdts;
    } ClipSequence;


    Decoding and encoding (same for audio):


    int decodeVideoSequence(ClipSequence* sequence, Video* video, AVPacket* packet) {
        int response = avcodec_send_packet(video->videoCodecContext_I, packet);
        if (response < 0) {
            printf("[ERROR] Failed to send video packet to decoder\n");
            return response;
        }
        AVFrame* frame = av_frame_alloc();
        while (response >= 0) {
            response = avcodec_receive_frame(video->videoCodecContext_I, frame);
            if (response == AVERROR(EAGAIN) || response == AVERROR_EOF) {
                break;
            } else if (response < 0) {
                printf("[ERROR] Failed to receive video frame from decoder\n");
                return response;
            }
            if (response >= 0) {
                // Do stuff and encode

                // Subtract first dts from the current dts
                sequence->v_currentdts = packet->dts - sequence->v_firstdts;

                if (encodeVideoSequence(sequence, video, frame) < 0) {
                    printf("[ERROR] Failed to encode new video\n");
                    return -1;
                }
            }
            av_frame_unref(frame);
        }
        return 0;
    }

    int encodeVideoSequence(ClipSequence* sequence, Video* video, AVFrame* frame) {
        AVPacket* packet = av_packet_alloc();
        if (!packet) {
            printf("[ERROR] Could not allocate memory for video output packet\n");
            return -1;
        }
        int response = avcodec_send_frame(video->videoCodecContext_O, frame);
        if (response < 0) {
            printf("[ERROR] Failed to send video frame for encoding\n");
            return response;
        }
        while (response >= 0) {
            response = avcodec_receive_packet(video->videoCodecContext_O, packet);
            if (response == AVERROR(EAGAIN) || response == AVERROR_EOF) {
                break;
            } else if (response < 0) {
                printf("[ERROR] Failed to receive video packet from encoder\n");
                return response;
            }
            // Update dts and pts of video
            packet->duration = VIDEO_PACKET_DURATION;
            int64_t cts = packet->pts - packet->dts;
            packet->dts = sequence->v_currentdts + sequence->v_lastdts + packet->duration;
            packet->pts = packet->dts + cts;
            packet->stream_index = video->videoStream;
            response = av_interleaved_write_frame(sequence->outputContext, packet);
            if (response < 0) {
                printf("[ERROR] Failed to write video packet\n");
                break;
            }
        }
        av_packet_unref(packet);
        av_packet_free(&packet);
        return 0;
    }
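    The timestamp bookkeeping in the encoder above (subtract the segment's first dts, add the previous segment's accumulated dts, preserve the pts-dts offset) can be sketched with plain integers. This is a minimal sketch, not FFmpeg code; the dts values and the fixed PACKET_DURATION are made up for illustration:

    ```c
    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>

    // Hypothetical fixed packet duration, standing in for VIDEO_PACKET_DURATION.
    #define PACKET_DURATION 1000

    // Rebase one packet: shift its dts so the segment starts right after the
    // previous segment's accumulated dts, keeping the pts-dts offset (cts).
    static void rebase(int64_t first_dts, int64_t last_dts,
                       int64_t* dts, int64_t* pts) {
        int64_t cts = *pts - *dts;            // frame-reordering offset
        int64_t current = *dts - first_dts;   // dts relative to segment start
        *dts = current + last_dts + PACKET_DURATION;
        *pts = *dts + cts;
    }

    int main(void) {
        // A segment whose raw dts starts at 90000; the previous segment's
        // accumulated dts is 5000.
        int64_t dts = 90000, pts = 91000;     // first packet of the segment
        rebase(90000, 5000, &dts, &pts);
        // dts = (90000 - 90000) + 5000 + 1000 = 6000; pts = 6000 + 1000 = 7000
        printf("dts=%lld pts=%lld\n", (long long)dts, (long long)pts);
        return 0;
    }
    ```

    Writing the rebased values back onto the packet corresponds to the packet->dts and packet->pts assignments in encodeVideoSequence above.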


    Cutting the video from a specific range of frames:


    int cutVideo(ClipSequence* sequence, Video* video, int startFrame, int endFrame) {
        printf("[WRITE] Cutting video from frame %i to %i\n", startFrame, endFrame);
        // Seeking stream is set to 0 by default and for testing purposes
        if (findPacket(video->inputContext, startFrame, 0) < 0) {
            printf("[ERROR] Failed to find packet\n");
        }
        AVPacket* packet = av_packet_alloc();
        if (!packet) {
            printf("[ERROR] Could not allocate packet for cutting video\n");
            return -1;
        }
        int currentFrame = startFrame;
        bool v_firstframe = true;
        bool a_firstframe = true;
        while (av_read_frame(video->inputContext, packet) >= 0 && currentFrame <= endFrame) {
            if (packet->stream_index == video->videoStream) {
                // Only count video frames since seeking is based on 60 fps video frames
                currentFrame++;
                // Store the first dts
                if (v_firstframe) {
                    v_firstframe = false;
                    sequence->v_firstdts = packet->dts;
                }
                if (decodeVideoSequence(sequence, video, packet) < 0) {
                    printf("[ERROR] Failed to decode and encode video\n");
                    return -1;
                }
            } else if (packet->stream_index == video->audioStream) {
                if (a_firstframe) {
                    a_firstframe = false;
                    sequence->a_firstdts = packet->dts;
                }
                if (decodeAudioSequence(sequence, video, packet) < 0) {
                    printf("[ERROR] Failed to decode and encode audio\n");
                    return -1;
                }
            }
            av_packet_unref(packet);
        }
        sequence->v_lastdts += sequence->v_currentdts;
        sequence->a_lastdts += sequence->a_currentdts;
        return 0;
    }
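    Chaining several segments can be sketched the same way, again with plain integers and made-up dts values. Note the accumulator update at the end of the sketch: advancing past the last written dts by the packet duration keeps the output strictly increasing, whereas a plain lastdts += currentdts (as in cutVideo above) makes the next segment's first packet repeat the previous segment's final dts, which is consistent with the non-monotonic error being reported only on first packets:

    ```c
    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>

    #define PACKET_DURATION 1000

    // Rebase one segment's raw dts values (in) to output dts values (out),
    // starting right after `lastdts`; returns the updated accumulator.
    static int64_t chain_segment(const int64_t* in, int n, int64_t lastdts,
                                 int64_t* out) {
        int64_t firstdts = in[0];            // stored from the first packet
        int64_t currentdts = 0;
        for (int i = 0; i < n; i++) {
            currentdts = in[i] - firstdts;   // dts relative to segment start
            out[i] = currentdts + lastdts + PACKET_DURATION;
        }
        // Advance past the last written dts, duration included.
        return lastdts + currentdts + PACKET_DURATION;
    }

    int main(void) {
        int64_t seg1[] = {90000, 91000, 92000};    // made-up raw dts values
        int64_t seg2[] = {500000, 501000, 502000};
        int64_t out1[3], out2[3];
        int64_t lastdts = chain_segment(seg1, 3, 0, out1);
        chain_segment(seg2, 3, lastdts, out2);
        // out1 = 1000, 2000, 3000; out2 = 4000, 5000, 6000
        for (int i = 0; i < 3; i++)
            printf("%lld %lld\n", (long long)out1[i], (long long)out2[i]);
        return 0;
    }
    ```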


    Finding the correct place in the video to start:


    int findPacket(AVFormatContext* inputContext, int frameIndex, int stream) {
        int64_t timebase;
        if (stream < 0) {
            timebase = AV_TIME_BASE;
        } else if (stream >= 0) {
            timebase = (inputContext->streams[stream]->time_base.den) / inputContext->streams[stream]->time_base.num;
        }
        int64_t seekTarget = timebase * frameIndex / VIDEO_DEFAULT_FPS;
        if (av_seek_frame(inputContext, stream, seekTarget, AVSEEK_FLAG_ANY) < 0) {
            printf("[ERROR] Failed to find keyframe from frame index %i\n", frameIndex);
            return -1;
        }
        return 0;
    }
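    The seek-target arithmetic in findPacket can be checked in isolation. A plain-C sketch, using a made-up 1/90000 time base of the kind commonly found in mp4 video streams:

    ```c
    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>

    #define VIDEO_DEFAULT_FPS 60

    // Convert a frame index to a seek target in stream time-base units,
    // mirroring findPacket() above: ticks-per-second * frame / fps.
    static int64_t seek_target(int tb_num, int tb_den, int frame_index) {
        int64_t timebase = tb_den / tb_num;   // ticks per second
        return timebase * frame_index / VIDEO_DEFAULT_FPS;
    }

    int main(void) {
        // Frame 120 of a 60 fps stream is at t = 2 s = 180000 ticks.
        printf("%lld\n", (long long)seek_target(1, 90000, 120));
        return 0;
    }
    ```

    Two caveats worth noting: AVSEEK_FLAG_ANY may land on a non-keyframe, which typically shows up as decoding artifacts at the start of a segment (AVSEEK_FLAG_BACKWARD seeks to the nearest preceding keyframe instead), and the integer division den / num is only exact for time bases with num == 1, such as 1/90000.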


    UPDATE:


    I have achieved the desired result, but not the way I wanted to. I encoded each segment to a separate video file, then encoded those separate files into a single sequence. However, this isn't an optimal way to achieve what I want: it's definitely a lot slower, and I had to write much more code than I believe should be necessary. I still don't know what the issue with my original approach is, and I would greatly appreciate any help.


  • Revision e85eaf6acd: Remove redundant mode update in sub8x8 decoding

    24 September 2013, by Jingning Han

    Changed Paths:
     Modify /vp9/decoder/vp9_decodemv.c



    Remove redundant mode update in sub8x8 decoding

    The probability model used to code prediction mode is conditioned
    on the immediate above and left 8x8 blocks' prediction modes. When
    the above/left block is coded in sub8x8 mode, we use the prediction
    mode of the bottom-right sub8x8 block as the reference to generate
    the context.

    This commit moves the update of mbmi.mode out of the sub8x8 decoding
    loop, hence removing redundant update steps and keeping the bottom-
    right block's mode for the decoding process of next blocks.

    Change-Id: I1e8d749684d201c1a1151697621efa5d569218b6