Advanced search

Media (0)

No media matching your criteria is available on this site.

Other articles (37)

  • Authorizations overridden by plugins

    27 April 2010

    MediaSPIP core
    autoriser_auteur_modifier() so that visitors are able to edit their information on the authors page

  • MediaSPIP Player: potential problems

    22 February 2011

    The player does not work on Internet Explorer
    On Internet Explorer (8 and 7 at least), the plugin uses the Flash player Flowplayer to play video and sound. If the player does not seem to work, the cause may be the configuration of Apache's mod_deflate module.
    If the configuration of that Apache module contains a line resembling the one below, try removing it or commenting it out to see whether the player then works correctly: (...)
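
    The snippet is truncated before the directive itself, so as a purely hypothetical illustration (the actual line is not shown above), a mod_deflate directive of the kind that can interfere with Flash playback might look like this:

    # Hypothetical example only; the original truncated line may differ.
    AddOutputFilterByType DEFLATE application/x-shockwave-flash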

  • Keeping control of your media in your hands

    13 April 2011

    The vocabulary used on this site, and around MediaSPIP in general, aims to avoid references to Web 2.0 and to the companies that profit from media sharing.
    While using MediaSPIP, you are invited to avoid using words like "Brand", "Cloud" and "Market".
    MediaSPIP is designed to facilitate the sharing of creative media online, while allowing authors to retain complete control of their work.
    MediaSPIP aims to be accessible to as many people as possible and development is based on expanding the (...)

On other sites (6307)

  • Videos storage and streaming [closed]

    12 July 2021, by Tomer

    I'll do my best to describe my thinking here.

    We came across a need to receive videos from users, store them, and later present those videos to other users on request.
    The argument is based on the idea that we can't serve one single file of the same size and quality to all clients; the client's network quality must be taken into consideration.
    One side claims that the best way to overcome this issue is to store multiple versions of each file (360p, 480p, 720p, etc.) and then, based on the client's network quality, estimate which file is best suited to their conditions. We would estimate the client's network quality by testing the connection quality to the S3 servers storing our files.
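
    For reference, this first approach would typically pre-generate the ladder with a transcoding job at upload time. Below is a minimal sketch, assuming an ffmpeg binary on the PATH; the file names, heights, and bitrates are purely illustrative:

    package main

    import (
        "fmt"
        "log"
        "os/exec"
    )

    // Illustrative ladder; the heights and bitrates are placeholders.
    var renditions = []struct {
        height  int
        bitrate string
    }{
        {360, "800k"},
        {480, "1400k"},
        {720, "2800k"},
    }

    // transcodeLadder shells out to ffmpeg once per rendition.
    func transcodeLadder(src string) error {
        for _, r := range renditions {
            out := fmt.Sprintf("%s_%dp.mp4", src, r.height)
            cmd := exec.Command("ffmpeg",
                "-i", src,
                "-vf", fmt.Sprintf("scale=-2:%d", r.height), // -2 keeps the width even
                "-c:v", "libx264", "-b:v", r.bitrate,
                "-c:a", "aac", "-b:a", "128k",
                "-movflags", "+faststart", // moov atom up front for web playback
                out,
            )
            if err := cmd.Run(); err != nil {
                return fmt.Errorf("transcoding %dp: %w", r.height, err)
            }
        }
        return nil
    }

    func main() {
        if err := transcodeLadder("input.mp4"); err != nil {
            log.Fatal(err)
        }
    }

    In practice most systems package such a ladder as HLS or DASH so the player can switch quality mid-stream, rather than probing S3 and picking one file up front.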

    The second party claims that storing one high-quality file is enough: upon a client's request, we would stream the file in a suitable encoding and quality using a third-party framework (from brief research, FFmpeg or GStreamer; it's not yet established which one or how, only the idea is being considered).
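
    For reference, this second approach usually means spawning the transcoder per request and piping its stdout straight into the response. A rough sketch of the idea, with error handling, request parsing, and per-client quality selection omitted (all names are illustrative):

    package main

    import (
        "log"
        "net/http"
        "os/exec"
    )

    // Hypothetical handler: transcode the single stored high-quality file on
    // the fly and stream fragmented MP4 straight into the HTTP response.
    func serveTranscoded(w http.ResponseWriter, r *http.Request) {
        cmd := exec.Command("ffmpeg",
            "-i", "master.mp4", // the one high-quality source
            "-vf", "scale=-2:720", // the target would be chosen per client
            "-c:v", "libx264", "-b:v", "2800k",
            "-c:a", "aac",
            "-movflags", "frag_keyframe+empty_moov", // fragmented MP4 works on a pipe
            "-f", "mp4", "-", // write to stdout
        )
        w.Header().Set("Content-Type", "video/mp4")
        cmd.Stdout = w // ffmpeg output flows straight to the client
        if err := cmd.Run(); err != nil {
            log.Printf("ffmpeg failed: %v", err)
        }
    }

    func main() {
        http.HandleFunc("/video", serveTranscoded)
        log.Fatal(http.ListenAndServe(":8080", nil))
    }

    The trade-off between the two camps is then CPU spent per view (on-the-fly) versus storage spent once per upload (pre-generated ladder).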

    Which approach is more accepted nowadays? Are there any other ideas we haven't thought of?

    A couple of notes: our backend is written in Node, using aws-sdk for S3 and Nest for the API.

    Thanks

  • Minimal sample of muxing two streams with no reencoding (av_interleaved_write_frame fails)

    19 July 2022, by Alvein

    What I'm trying to do: given two files with identical durations, one video-only and the other audio-only, I want to "join" them in a single container.

    I previously made a routine which just copies all the streams from one container into another, with no re-encoding. This works perfectly:

    while(true) {
        pkIn=av_packet_alloc();
        if(NULL==pkIn) {
            fprintf(stderr,"av_packet_alloc() failed");
            break;
        }
        iError=av_read_frame(fcIn,pkIn);
        if(0>iError) {
            if(AVERROR_EOF!=iError)
                fprintf(stderr,"av_read_frame() failed");
            av_packet_free(&pkIn); /* free the packet on EOF and on error too */
            break;
        }
        stIn=fcIn->streams[pkIn->stream_index];
        stOut=fcOut->streams[pkIn->stream_index];
        log_packet(fcIn,pkIn,"in");
        /* rescale timestamps from the input to the output time base */
        av_packet_rescale_ts(pkIn,stIn->time_base,stOut->time_base);
        pkIn->pos=-1;
        log_packet(fcOut,pkIn,"out");
        iError=av_interleaved_write_frame(fcOut,pkIn);
        if(0>iError) {
            fprintf(stderr,"av_interleaved_write_frame() failed");
            break;
        }
        av_packet_free(&pkIn);
    }

    I followed the same pattern and tried to do the same thing, but taking each stream from a distinct container, like this:

    while(true) {
        if(!bVideoInEOF) {
            pkVideoIn=av_packet_alloc();
            if(NULL==pkVideoIn) {
                fprintf(stderr,"av_packet_alloc(video in) failed");
                break;
            }
            iError=av_read_frame(fcVideoIn,pkVideoIn);
            if(0>iError) {
                if(AVERROR_EOF==iError)
                    bVideoInEOF=true; /* stop reading video, keep draining audio */
                else {
                    fprintf(stderr,"av_read_frame(video in) failed");
                    av_packet_free(&pkVideoIn);
                    break;
                }
            }
            if(!bVideoInEOF) {
                log_packet(fcVideoIn,pkVideoIn,"video in");
                av_packet_rescale_ts(pkVideoIn,stVideoIn->time_base,stVideoOut->time_base);
                pkVideoIn->pos=-1;
                pkVideoIn->stream_index=stVideoOut->index; // Edit (2022-07-19)
                log_packet(fcVideoIn,pkVideoIn,"video out");
                iError=av_interleaved_write_frame(fcOut,pkVideoIn);
                if(0>iError) {
                    fprintf(stderr,"av_interleaved_write_frame(video out) failed");
                    break;
                }
            }
            av_packet_free(&pkVideoIn);
        }
        if(!bAudioInEOF) {
            pkAudioIn=av_packet_alloc();
            if(NULL==pkAudioIn) {
                fprintf(stderr,"av_packet_alloc(audio in) failed");
                break;
            }
            iError=av_read_frame(fcAudioIn,pkAudioIn);
            if(0>iError) {
                if(AVERROR_EOF==iError)
                    bAudioInEOF=true; /* stop reading audio, keep draining video */
                else {
                    fprintf(stderr,"av_read_frame(audio in) failed");
                    av_packet_free(&pkAudioIn);
                    break;
                }
            }
            if(!bAudioInEOF) {
                log_packet(fcAudioIn,pkAudioIn,"audio in");
                av_packet_rescale_ts(pkAudioIn,stAudioIn->time_base,stAudioOut->time_base);
                pkAudioIn->pos=-1;
                pkAudioIn->stream_index=stAudioOut->index; // Edit (2022-07-19)
                log_packet(fcAudioIn,pkAudioIn,"audio out");
                iError=av_interleaved_write_frame(fcOut,pkAudioIn);
                if(0>iError) {
                    fprintf(stderr,"av_interleaved_write_frame(audio out) failed");
                    break;
                }
            }
            av_packet_free(&pkAudioIn);
        }
        if(bVideoInEOF&&bAudioInEOF)
            break;
    }

    I know the code above looks redundant, but I wanted to keep the processing of the two streams separate so that my plan is easier to follow.

    Anyway, that code ends quickly with "av_interleaved_write_frame(audio out) failed".

    The error detail is "Invalid argument", and the debugger shows this:

    Application provided invalid, non monotonically increasing dts to muxer in stream 0.

    If I disable either of the main blocks, "if(!bVideoInEOF)" or "if(!bAudioInEOF)", the file is written successfully, obviously lacking the disabled stream.

    I'm new to using this library, so I'm probably doing something really stupid or missing something obvious.

    Suggestions?

    Edit (2022-07-19):

    By checking the logs, I noticed I was writing every frame to stream #0; hence the horrible jumps in PTS/DTS.

    I edited the code by adding the corresponding "...->stream_index=" assignment before each call to av_interleaved_write_frame().

    ...

    Though it works, I still think my code is far from perfect. Comments are welcome.

  • How to send encoded video (or audio) data from server to client in a way that's decodable by webcodecs API using minimal latency and data overhead

    11 January 2023, by Tiger Yang

    My question (read the entire post for context):

    Given the unique circumstance of only ever decoding data from a specifically-configured encoder, what is the best way I can send the encoded bitstream along with the bare minimum extra bytes required to properly configure the decoder on the client's end (including only things that change per stream, and omitting things that don't, such as resolution)? I'm a sucker for zero compromises, and I think I am willing to design my own minimal container format to accomplish this.

    Context and problem:

    I'm working on a remote desktop implementation whose server captures and encodes the display and speakers using FFmpeg and forwards the result via a pipe to a Go program, which sends it over two unidirectional WebTransport streams to my client, where I plan to decode it using the WebCodecs API. According to MDN, the video decoder needs to be fed, via .configure(), an object containing the following before it is able to decode anything: https://developer.mozilla.org/en-US/docs/Web/API/VideoDecoder/configure

    The same goes for the audio decoder: https://developer.mozilla.org/en-US/docs/Web/API/AudioDecoder/configure

    What I've tried so far:

    Because this remote desktop will be for my personal use only, it would only ever receive streams from a specific encoder configured in a specific way: encoding video at a specific resolution, framerate, color space, and so on. Therefore, I took my video-capture FFmpeg command...

    videoString := []string{
        "ffmpeg",
        "-init_hw_device", "d3d11va",
        "-filter_complex", "ddagrab=video_size=1920x1080:framerate=60",
        "-vcodec", "hevc_nvenc",
        "-tune", "ll",
        "-preset", "p7",
        "-spatial_aq", "1",
        "-temporal_aq", "1",
        "-forced-idr", "1",
        "-rc", "cbr",
        "-b:v", "500K",
        "-no-scenecut", "1",
        "-g", "216000",
        "-f", "hevc", "-",
    }

    ...and instructed it to write to an mp4 file instead of outputting to the pipe, and then had this WebCodecs demo, https://w3c.github.io/webcodecs/samples/video-decode-display/, demux it using mp4box.js. Knowing that the demo produces a proper .configure() object, I blindly copied it and had my client configure itself with that object every time. Sadly, it didn't work, and I have since noticed that the "description" part of the configure object changes despite the encoder and its parameters staying the same.

    I knew that mp4 files worked via mp4box, but they can't be streamed with low latency over a network; additionally, FFmpeg's -f parameter specifies which muxer to use, and there are so many different types.

    At this point, I think I'm completely out of my depth, so:

    Given the unique circumstance of only ever decoding data from a specifically-configured encoder, what is the best way I can send the encoded bitstream along with the bare minimum extra bytes required to properly configure the decoder on the client's end (including only things that change per stream, and omitting things that don't, such as resolution)? I'm a sucker for zero compromises, and I think I am willing to design my own minimal container format to accomplish this. (copied above)
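
    Since both endpoints are under your control, here is one hedged sketch of that "own minimal container" idea, in Go because that is where the relay lives: a chunk can be just a fixed header plus the payload, with the decoder configuration sent once as its own chunk type. The layout below is made up purely for illustration, and WriteChunk assumes something upstream has already split the pipe output into access units:

    package framing

    import (
        "encoding/binary"
        "io"
    )

    // Hypothetical chunk layout (illustrative, not an established format):
    //   1 byte  type: 0 = decoder config, 1 = delta frame, 2 = key frame
    //   8 bytes timestamp in microseconds, big endian
    //   4 bytes payload length, big endian
    //   N bytes payload (codec config, or one encoded access unit)
    const (
        ChunkConfig byte = 0
        ChunkDelta  byte = 1
        ChunkKey    byte = 2
    )

    // WriteChunk frames one payload onto a WebTransport (or any byte) stream.
    func WriteChunk(w io.Writer, typ byte, timestampUS uint64, payload []byte) error {
        var hdr [13]byte
        hdr[0] = typ
        binary.BigEndian.PutUint64(hdr[1:9], timestampUS)
        binary.BigEndian.PutUint32(hdr[9:13], uint32(len(payload)))
        if _, err := w.Write(hdr[:]); err != nil {
            return err
        }
        _, err := w.Write(payload)
        return err
    }

    On the client, a type-0 chunk would feed VideoDecoder.configure() and types 1/2 would become EncodedVideoChunk objects of type "key"/"delta". As for the changing "description": for HEVC it is an hvcC record embedding the parameter sets, which can differ between runs even with identical settings; if I read the WebCodecs codec registrations correctly, omitting description signals Annex B input, so the in-band VPS/SPS/PPS of your -f hevc output may already be all the configuration the decoder needs.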