Other articles (97)

  • MediaSPIP 0.1 Beta version

    25 April 2011

    MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
    The zip file provided here only contains the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to proceed to other manual (...)

  • Multilang: improving the interface for multilingual blocks

    18 February 2011

    Multilang is an additional plugin that is not enabled by default when MediaSPIP is initialized.
    Once activated, a preconfiguration is automatically put in place by MediaSPIP init so that the new feature is immediately operational. It is therefore not necessary to go through a configuration step for this.

  • APPENDIX: The plugins used specifically for the farm

    5 March 2010

    The central/master site of the farm needs several additional plugins, beyond those of the channels, to work properly: the Gestion de la mutualisation plugin; the inscription3 plugin, to manage registrations and requests to create a mutualisation instance as soon as users sign up; the verifier plugin, which provides a field-validation API (used by inscription3); the champs extras v2 plugin, required by inscription3 (...)

On other sites (6681)

  • How to playback RAW video and audio in VLC?

    24 February 2014, by Lane

    I have 2 files...

    • RAW H264 video
    • RAW PCM audio (uncompressed from PCM Mu Law)

    ...and I am looking to be able to play them in a Java application (using VLCJ possibly). I am able to run the ffmpeg command...

    • ffmpeg -i video -i audio -preset ultrafast movie.mp4

    ...to generate an mp4, but it takes 1/8 of the source length (it takes 1 min to generate a movie for 8 min of RAW data). My problem is that this is not fast enough for me, so I am trying to play back the RAW sources directly. I can play the video with the VLC command...

    • vlc video --demux=h264 (if I don't specify this flag, it doesn't work)

    ...and it plays correctly, but gives me the error...

    [0x10028bbe0] main interface error: no suitable interface module
    [0x10021d4a0] main libvlc: Running vlc with the default interface. Use 'cvlc' to use vlc without interface.
    [0x10aa14950] h264 demux error: this doesn't look like a H264 ES stream, continuing anyway
    [0x1003ccb50] main input error: Invalid PCR value in ES_OUT_SET_(GROUP_)PCR!
    shader program 1: WARNING: Output of vertex shader 'TexCoord1' not read by fragment shader
    WARNING: Output of vertex shader 'TexCoord2' not read by fragment shader

    ...similarly, I can play the RAW audio with the VLC command...

    • vlc audio (note that I do not need to specify the --demux flag)

    ...so, what I am looking for is...

    1. How to play back the RAW audio and video together using the VLC CLI?
    2. Recommendations for a Java application solution?

    ...thanks!

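    Regarding the speed problem above: since the video is already H264, a remux with stream copy avoids the encode entirely and typically runs much faster than real time. This is only a sketch, and the audio parameters are assumptions (signed 16-bit mono at 8 kHz after the Mu Law decode, 30 fps video); adjust -f/-ar/-ac/-framerate to match the actual data...

    • ffmpeg -fflags +genpts -framerate 30 -f h264 -i video -f s16le -ar 8000 -ac 1 -i audio -c:v copy -c:a aac movie.mp4

    The resulting mp4 should then play in VLC (and via VLCJ) without any demux hints, since the container now carries the timing for both streams.
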
  • Play stream on NGinx at a certain time

    6 June 2020, by Edius

    I have a VDS with NGinx installed for WebTV. I need to start playing the mp4 file at 8:00 AM and stream it via RTMP. How should I change my nginx.conf? Right now it plays the file from the starting point when a user presses the "Play" button, but I need the user to see the current point of the stream, like on TV. My config:

    rtmp {
        server {
            listen 1935;
            chunk_size 4000;

            play_time_fix off;
            interleave on;
            publish_time_fix on;

            application app {
                live on;
                exec_play ffmpeg -re -stream_loop 2 -i /var/www/html/video/video.mp4 -c copy -f flv rtmp://.../app/stream;
            }
        }
    }

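    As far as I understand nginx-rtmp, exec_play starts a new ffmpeg push each time playback begins, which is why every viewer sees the file from the start. A common TV-like setup is to drop exec_play and publish the file as a scheduled live stream instead; with live on, every client that connects joins the broadcast at its current position. A minimal sketch, assuming cron and ffmpeg on the VDS (host and stream name are placeholders):

        # crontab entry: start publishing the file to the RTMP app at 8:00 AM every day
        0 8 * * * ffmpeg -re -i /var/www/html/video/video.mp4 -c copy -f flv rtmp://localhost/app/stream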

  • FFmpeg - Putting segments of the same video together

    11 June 2020, by parthlr

    I am trying to take different segments of the same video and put them together in a new video, essentially cutting out the parts in between the segments. I have built on the answer to this question that I asked before to try and do this. I figured that, when joining segments of the same video, I would have to subtract each segment's first dts so that it starts exactly where the previous segment ends.

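    For example, rough numbers: if the previous segment ended at a rebased dts of 3600 and the next segment's packets start at dts 9000 in the source, then subtracting the segment's first dts (9000) rebases its first packet to 0, and adding the accumulated last dts (3600) should place it at 3600, immediately after the previous segment.
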
    However, when I attempt to do this, I once again get the error "Application provided invalid, non monotonically increasing dts to muxer in stream 0". The error appears for both stream 0 and stream 1 (video and audio), and it seems that I receive it only for the first packet in each segment.

    On top of that, the output file plays the segments in the correct order, but the video freezes for about a second at the transition from one segment to the next. I have a feeling that this is because the dts of each packet is not set properly, and as a result each segment starts about a second later than it should.

    This is the code that I have written:

    Video and ClipSequence structs:

typedef struct Video {
    char* filename;
    AVFormatContext* inputContext;
    AVFormatContext* outputContext;
    AVCodec* videoCodec;
    AVCodec* audioCodec;
    AVStream* inputStream;
    AVStream* outputStream;
    AVCodecContext* videoCodecContext_I; // Input
    AVCodecContext* audioCodecContext_I; // Input
    AVCodecContext* videoCodecContext_O; // Output
    AVCodecContext* audioCodecContext_O; // Output
    int videoStream;
    int audioStream;
    SwrContext* swrContext;
} Video;

typedef struct ClipSequence {
    VideoList* videos;
    AVFormatContext* outputContext;
    AVStream* outputStream;
    // Per-stream dts bookkeeping (named to match the code below)
    int64_t v_firstdts, a_firstdts;     // first dts of the current segment
    int64_t v_currentdts, a_currentdts; // current dts, rebased to the segment start
    int64_t v_lastdts, a_lastdts;       // accumulated dts of all finished segments
} ClipSequence;

    Decoding and encoding (same for audio):

int decodeVideoSequence(ClipSequence* sequence, Video* video, AVPacket* packet) {
    int response = avcodec_send_packet(video->videoCodecContext_I, packet);
    if (response < 0) {
        printf("[ERROR] Failed to send video packet to decoder\n");
        return response;
    }
    AVFrame* frame = av_frame_alloc();
    while (response >= 0) {
        response = avcodec_receive_frame(video->videoCodecContext_I, frame);
        if (response == AVERROR(EAGAIN) || response == AVERROR_EOF) {
            break;
        } else if (response < 0) {
            printf("[ERROR] Failed to receive video frame from decoder\n");
            av_frame_free(&frame);
            return response;
        }
        // Do stuff and encode

        // Subtract the first dts from the current dts to rebase this segment to 0
        sequence->v_currentdts = packet->dts - sequence->v_firstdts;

        if (encodeVideoSequence(sequence, video, frame) < 0) {
            printf("[ERROR] Failed to encode new video\n");
            av_frame_free(&frame);
            return -1;
        }
        av_frame_unref(frame);
    }
    av_frame_free(&frame);
    return 0;
}

int encodeVideoSequence(ClipSequence* sequence, Video* video, AVFrame* frame) {
    AVPacket* packet = av_packet_alloc();
    if (!packet) {
        printf("[ERROR] Could not allocate memory for video output packet\n");
        return -1;
    }
    int response = avcodec_send_frame(video->videoCodecContext_O, frame);
    if (response < 0) {
        printf("[ERROR] Failed to send video frame for encoding\n");
        return response;
    }
    while (response >= 0) {
        response = avcodec_receive_packet(video->videoCodecContext_O, packet);
        if (response == AVERROR(EAGAIN) || response == AVERROR_EOF) {
            break;
        } else if (response < 0) {
            printf("[ERROR] Failed to receive video packet from encoder\n");
            return response;
        }
        // Update dts and pts of video
        packet->duration = VIDEO_PACKET_DURATION;
        int64_t cts = packet->pts - packet->dts;
        packet->dts = sequence->v_currentdts + sequence->v_lastdts + packet->duration;
        packet->pts = packet->dts + cts;
        packet->stream_index = video->videoStream;
        response = av_interleaved_write_frame(sequence->outputContext, packet);
        if (response < 0) {
            printf("[ERROR] Failed to write video packet\n");
            break;
        }
    }
    av_packet_unref(packet);
    av_packet_free(&packet);
    return 0;
}
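
    For comparison, the pattern used in FFmpeg's own remuxing/concat examples keeps one running offset per stream and shifts every packet of a segment by the same amount, instead of adding the packet duration into the dts. A minimal sketch under the assumption that packets are already in the output time base (rebase_packet is a hypothetical helper, not part of my code):

#include <libavcodec/avcodec.h>

// Shift a segment's timestamps so that its first dts lands right after the
// previous segment's last dts (offset), preserving the pts-dts gap so that
// B-frame reordering survives the shift.
static void rebase_packet(AVPacket* pkt, int64_t first_dts, int64_t offset) {
    int64_t cts = pkt->pts - pkt->dts; // presentation/decode gap
    pkt->dts = (pkt->dts - first_dts) + offset;
    pkt->pts = pkt->dts + cts;
}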

    Cutting the video from a specific range of frames:

int cutVideo(ClipSequence* sequence, Video* video, int startFrame, int endFrame) {
    printf("[WRITE] Cutting video from frame %i to %i\n", startFrame, endFrame);
    // Seeking stream is set to 0 by default and for testing purposes
    if (findPacket(video->inputContext, startFrame, 0) < 0) {
        printf("[ERROR] Failed to find packet\n");
    }
    AVPacket* packet = av_packet_alloc();
    if (!packet) {
        printf("[ERROR] Could not allocate packet for cutting video\n");
        return -1;
    }
    int currentFrame = startFrame;
    bool v_firstframe = true;
    bool a_firstframe = true;
    while (av_read_frame(video->inputContext, packet) >= 0 && currentFrame <= endFrame) {
        if (packet->stream_index == video->videoStream) {
            // Only count video frames since seeking is based on 60 fps video frames
            currentFrame++;
            // Store the first dts
            if (v_firstframe) {
                v_firstframe = false;
                sequence->v_firstdts = packet->dts;
            }
            if (decodeVideoSequence(sequence, video, packet) < 0) {
                printf("[ERROR] Failed to decode and encode video\n");
                av_packet_free(&packet);
                return -1;
            }
        } else if (packet->stream_index == video->audioStream) {
            if (a_firstframe) {
                a_firstframe = false;
                sequence->a_firstdts = packet->dts;
            }
            if (decodeAudioSequence(sequence, video, packet) < 0) {
                printf("[ERROR] Failed to decode and encode audio\n");
                av_packet_free(&packet);
                return -1;
            }
        }
        av_packet_unref(packet);
    }
    av_packet_free(&packet);
    sequence->v_lastdts += sequence->v_currentdts;
    sequence->a_lastdts += sequence->a_currentdts;
    return 0;
}

    Finding the correct place in the video to start:

int findPacket(AVFormatContext* inputContext, int frameIndex, int stream) {
    int64_t timebase;
    if (stream < 0) {
        timebase = AV_TIME_BASE;
    } else if (stream >= 0) {
        timebase = (inputContext->streams[stream]->time_base.den) / inputContext->streams[stream]->time_base.num;
    }
    int64_t seekTarget = timebase * frameIndex / VIDEO_DEFAULT_FPS;
    if (av_seek_frame(inputContext, stream, seekTarget, AVSEEK_FLAG_ANY) < 0) {
        printf("[ERROR] Failed to find keyframe from frame index %i\n", frameIndex);
        return -1;
    }
    return 0;
}

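    As an aside, the integer division den / num in findPacket truncates for time bases like 1001/30000 (NTSC), and av_rescale_q avoids that. A sketch of the same seek-target computation, assuming constant-fps input:

#include <libavutil/mathematics.h>

// Convert a frame index (in units of 1/VIDEO_DEFAULT_FPS seconds)
// into a timestamp in the stream's own time base.
int64_t seekTarget = av_rescale_q(frameIndex,
                                  (AVRational){1, VIDEO_DEFAULT_FPS},
                                  inputContext->streams[stream]->time_base);
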
    UPDATE:

    I have achieved the desired result, but not in the way that I wanted. I encoded each segment to a separate video file, then encoded those separate videos into one final sequence. This isn't the optimal way to achieve what I want: it's definitely a lot slower, and I wrote a lot more code than I believe I should have. I still don't know what causes my original problem, and I would greatly appreciate any help.
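
    For reference, the two-step workaround above is essentially what FFmpeg's concat demuxer does from the command line; assuming all intermediate files share the same codecs and parameters, something like this joins them without another encode (list.txt holds one file 'segmentN.mp4' line per segment):

    ffmpeg -f concat -safe 0 -i list.txt -c copy output.mp4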