Advanced search

Media (1)

Keyword: - Tags - / biomaping

Other articles (104)

  • MediaSPIP version 0.1 Beta

    16 April 2011

    MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
    The zip file provided here contains only the sources of MediaSPIP in its standalone version.
    To get a working installation, you need to install all the software dependencies manually on the server.
    If you want to use this archive for an installation in "farm" mode, you will also need to make further modifications (...)

  • MediaSPIP 0.1 Beta version

    25 April 2011

    MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed "usable".
    The zip file provided here only contains the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all the software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to proceed with other manual (...)

  • Improving the base version

    13 September 2013

    Nicer multiple selection
    The Chosen plugin improves the usability of multiple-selection fields. See the two images below to compare.
    To use it, simply enable the Chosen plugin (General site configuration > Plugin management), then configure it (Templates > Chosen) by enabling Chosen on the public site and specifying which form elements to enhance, for example select[multiple] for multiple-selection lists (...)

On other sites (9494)

  • FFMpeg Coding in C: Encoder returns EOF at first interaction. Encoder not opened correctly? [closed]

    26 February, by Davidhohey

    As I'm fairly new to FFmpeg programming and C in general, the code looks like a mess.

    I have smashed my head against a wall trying to get this code to work for about a week.

    int decode_encode_pipeline(AVFormatContext *Input_Format_Context, AVFormatContext *Output_Format_Context, int *streams_list){

    const AVCodec *DECodec, *ENCodec;
    AVCodecContext *DECodecContext = NULL, *ENCodecContext = NULL;
    AVCodecParameters *CodecParameters = NULL;
    AVDictionary *opts = NULL;
    AVPacket *Packet;
    AVFrame *Frame;
    int check;

    Packet = av_packet_alloc();
    if(!Packet){
    
        printf("\nFehler bei Allocating Packet");
    
        return 0;
    
    }

    Frame = av_frame_alloc();
    if(!Frame){
    
        printf("\nFehler bei Allocating Frame");
    
        return 0;
    
    }

    CodecParameters = Input_Format_Context->streams[Packet->stream_index]->codecpar;
    if(!CodecParameters){

        printf("\nCodecParameters konnte nicht erstellt oder zugewiesen werden.");

    }

    DECodec = avcodec_find_decoder(CodecParameters->codec_id);
    if(!DECodec){
    
        printf("\nCodec nicht gefunden");
    
        return 0;
    
    }

    DECodecContext = avcodec_alloc_context3(DECodec);
    if (!DECodecContext){
    
        printf("\nFehler bei Allocating CodecContext");
    
        return 0;
    
    }

    ENCodec = avcodec_find_encoder(CodecParameters->codec_id);
    if(!DECodec){
    
        printf("\nCodec nicht gefunden");
    
        return 0;
    
    }

    ENCodecContext = avcodec_alloc_context3(ENCodec);
    if (!ENCodecContext){
    
        printf("\nFehler bei Allocating CodecContext");
    
        return 0;
    
    }

    check = avformat_write_header(Output_Format_Context, &opts);
    if(check < 0){

        printf("\nFehler beim Öffnen des Output Files.");
        
        return 1;

    }

    avcodec_parameters_to_context(DECodecContext, CodecParameters);
    avcodec_parameters_to_context(ENCodecContext, CodecParameters);

    ENCodecContext->width = DECodecContext->width;
    ENCodecContext->height = DECodecContext->height;
    ENCodecContext->bit_rate = DECodecContext->bit_rate;
    ENCodecContext->time_base = (AVRational){1, 30};
    ENCodecContext->framerate = DECodecContext->framerate;
    ENCodecContext->gop_size = DECodecContext->gop_size;
    ENCodecContext->max_b_frames = DECodecContext->max_b_frames;
    ENCodecContext->pix_fmt = DECodecContext->pix_fmt;
    if(ENCodec->id == AV_CODEC_ID_H264){

        av_opt_set(ENCodecContext->priv_data, "preset", "slow", 0);

    }

    check = avcodec_open2(DECodecContext, DECodec, NULL);
    if(check < 0){
    
        printf("\nFehler bei Öffnen von DECodec");
    
        return 1;
    
    }

    check = avcodec_open2(ENCodecContext, ENCodec, NULL);
    if(check < 0){
    
        printf("\nFehler bei Öffnen von ENCodec");
    
        return 1;
    
    }

    while(1){
    
        check = av_read_frame(Input_Format_Context, Packet);
        if(check < 0){
        
            break;
        
        }

        AVStream *in_stream, *out_stream;

        in_stream  = Input_Format_Context->streams[Packet->stream_index];
        out_stream = Output_Format_Context->streams[Packet->stream_index];

        if(in_stream->codecpar->codec_type == AVMEDIA_TYPE_VIDEO && Packet->stream_index == streams_list[Packet->stream_index]){

            check = avcodec_send_packet(DECodecContext, Packet);
            if(check < 0){

                printf("\nFehler bei Encoding");

                return 1;

            }

            AVPacket *EncodedPacket;
            EncodedPacket = av_packet_alloc();
            if(!EncodedPacket){
        
                printf("\nFehler bei Allocating Packet");
        
                return 1;
        
            }

            /*While Loop Decoding*/
            while(check >= 0){
    
                check = avcodec_receive_frame(DECodecContext, Frame);
                if(check == AVERROR(EAGAIN)){
        
                    continue;
        
                }else if(check == AVERROR_EOF){
                    
                    break;
                    
                }else if(check < 0){
        
                    printf("\nFehler bei Decoding");
        
                    return 1;
        
                }

                /*Convert Colorspace*/
                struct SwsContext *SwsContexttoRGB = sws_getContext(Frame->width, Frame->height, Frame->format, Frame->width, Frame->height, AV_PIX_FMT_RGB24, SWS_BILINEAR, NULL, NULL, NULL);
                struct SwsContext *SwsContexttoOriginal = sws_getContext(Frame->width, Frame->height, AV_PIX_FMT_RGB24, Frame->width, Frame->height, Frame->format, SWS_BILINEAR, NULL, NULL, NULL);
                if(!SwsContexttoRGB || !SwsContexttoOriginal){

                    printf("\nSwsContext konnte nicht befüllt werden.");

                    return 1;

                }   

                if(Frame->linesize < 0){

                    printf("\nFehler: linesize ist negativ und nicht kompatibel\n");

                    return 1;

                }

                AVFrame *RGBFrame;
                RGBFrame = av_frame_alloc();
                if(!RGBFrame){

                    printf("\nFehler bei der Reservierung für den RGBFrame");

                    return 1;

                }
                /*
                int number_bytes = av_image_get_buffer_size(AV_PIX_FMT_RGB24, Frame->width, Frame->height, 1);
                if(number_bytes < 0){

                    printf("\nFehler bei der Berechnung der benoetigten Bytes fuer Konvertierung");

                    return 1;

                }
                
                uint8_t *rgb_buffer = (uint8_t *)av_malloc(number_bytes*sizeof(uint8_t));
                if(rgb_buffer == NULL){

                    printf("\nFehler bei der Reservierung für den RGBBuffer");

                    return 1;

                }

                check = av_image_fill_arrays(RGBFrame->data, RGBFrame->linesize, rgb_buffer, AV_PIX_FMT_RGB24, Frame->width, Frame->height, 1);
                if(check < 0){

                    printf("\nFehler bei der Zuweisung der RGB Daten");

                    return 1;

                }*/

                //sws_scale(SwsContexttoRGB, (const uint8_t * const *)Frame->data, Frame->linesize, 0, Frame->height, RGBFrame->data, RGBFrame->linesize);
                sws_scale_frame(SwsContexttoRGB, Frame, RGBFrame);
                printf("\nIch habe die Daten zu RGB konvertiert.");

                //sws_scale(SwsContexttoOriginal, (const uint8_t * const *)RGBFrame->data, RGBFrame->linesize, 0, Frame->height, Frame->data, Frame->linesize);
                sws_scale_frame(SwsContexttoOriginal, RGBFrame, Frame);
                printf("\nIch habe die Daten zurück ins Original konvertiert.");

                Frame->format = ENCodecContext->pix_fmt;
                Frame->width  = ENCodecContext->width;
                Frame->height = ENCodecContext->height;
                
                check = av_frame_get_buffer(Frame, 0);
                if(check < 0){
        
                    printf("\nFehler bei Allocating Frame Buffer");
        
                    return 1;
        
                }

                /* Encoding */
                check = av_frame_make_writable(Frame);
                if(check < 0){

                    printf("\nFehler bei Make Frame Writable");

                    return 1;

                }

                encode(ENCodecContext, Frame, EncodedPacket, Output_Format_Context);

                sws_freeContext(SwsContexttoRGB);
                sws_freeContext(SwsContexttoOriginal);
                av_frame_free(&RGBFrame);
                //av_free(rgb_buffer);

            }

            /* Flushing Encoder */
            encode(ENCodecContext, NULL, EncodedPacket, Output_Format_Context);

            //avcodec_flush_buffers(DECodecContext);
            //avcodec_flush_buffers(ENCodecContext);

            av_packet_free(&EncodedPacket);

        }else{

            av_interleaved_write_frame(Output_Format_Context, Packet);

        }

    }

    av_write_trailer(Output_Format_Context); 

    /* Memory Free */
    avcodec_free_context(&DECodecContext);
    avcodec_free_context(&ENCodecContext);
    avcodec_parameters_free(&CodecParameters);
    av_frame_free(&Frame);
    av_packet_free(&Packet);

    return 0;

}

    The function encode looks as follows:

    static void encode(AVCodecContext *ENCodecContext, AVFrame *Frame, AVPacket *EncodedPacket, AVFormatContext *Output_Format_Context){

    int check;



    check = avcodec_send_frame(ENCodecContext, Frame);
    if(check == AVERROR(EAGAIN)){
        printf("\nEAGAIN");
    } 
    if(check == AVERROR_EOF){
        printf("\nEOF");
    }
    if(check == AVERROR(EINVAL)){
        printf("\nEINVAL");
    }
    if(check == AVERROR(ENOMEM)){
        printf("\nENOMEM");
    }
    if(check < 0){

        printf("\nFehler bei Encoding Send Frame. Check = %d", check);

        return;

    }

    while(check >= 0){

        check = avcodec_receive_packet(ENCodecContext, EncodedPacket);
        if(check == AVERROR(EAGAIN) || check == AVERROR_EOF){

            return;

        }else if(check < 0){

            printf("\nFehler bei Encoding");

            return;

        }

        if (av_interleaved_write_frame(Output_Format_Context, EncodedPacket) < 0) {

            printf("\nFehler beim Muxen des Paketes.");
            break;

        }

        av_packet_unref(EncodedPacket);

    }

    return;

}

    The program should decode a video into individual frames, convert them to RGB24 so I can work with the raw data of each frame, then convert them back to the original format and encode them.

    The encoder doesn't play nice: I get an EOF error at avcodec_send_frame(), but I couldn't figure out why the encoder behaves like this. And yes, I have read the docs and the example files, but either I'm massively missing a crucial detail or I'm just ****.

    Any and all help is massively appreciated.

    P.S.: The libraries used are libavutil, libavformat, libavcodec and libswscale, all installed with the "-dev" suffix from the Linux command line. They should all be the version 7.0 libraries.

    Thanks in advance.
    With best regards.

    What I have tried so far:

    • Read the docs
    • Shifting the encoding step out of the decoding while loop
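
    For comparison, the intended pipeline (demux, decode, convert to RGB24, touch the raw pixels, convert back, encode, mux) can be sketched very compactly with the PyAV bindings instead of the C API. This is only an illustrative sketch under my own assumptions (PyAV, a single video stream, hypothetical file names), not the poster's code; the structural point it is meant to show is that the encoder is flushed exactly once, after the demux loop, because sending a flush (NULL) frame puts an encoder into draining mode and any later send call then reports EOF.

    import av  # PyAV: an assumption for this sketch, not part of the original post

    with av.open("input.mp4") as src, av.open("output.mp4", mode="w") as dst:
        in_stream = src.streams.video[0]
        out_stream = dst.add_stream("libx264", rate=30)
        out_stream.width = in_stream.codec_context.width
        out_stream.height = in_stream.codec_context.height
        out_stream.pix_fmt = "yuv420p"

        for frame in src.decode(in_stream):
            rgb = frame.to_ndarray(format="rgb24")       # raw RGB24 pixel data
            # ... work with the raw data here ...
            new_frame = av.VideoFrame.from_ndarray(rgb, format="rgb24")
            for packet in out_stream.encode(new_frame):  # send frame / receive packets
                dst.mux(packet)

        for packet in out_stream.encode():               # flush the encoder once, at the very end
            dst.mux(packet)

    In the C API the analogous rule is that avcodec_send_frame(ctx, NULL) and the subsequent avcodec_receive_packet() drain loop belong after av_read_frame() has run out of packets, not inside the per-packet loop.
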
  • Reading file stream from Google Cloud Storage to ffmpeg (using fluent-ffmpeg)

    27 July 2018, by ekuusi

    I’m trying to run ffmpeg on a NodeJS backend with fluent-ffmpeg, reading input files from Google Cloud Storage. Everything works fine if I download the file first:

    const file = storage
       .bucket('example_bucket')
       .file('examplefile.mp4');

    file.download({destination: 'test.mp4'}, (err) => {

       let command = ffmpeg()
       .input('test.mp4')
       .duration(10)
       .format('mp4');

       command.save('test_out.mp4');

    });

    res.json([{
       message: 'Command sent!'
    }]);

    But if I try to use a readable stream as the input, it fails:

    const file = storage
       .bucket('example_bucket')
       .file('examplefile.mp4');


    var filestream = file.createReadStream()

    let command = ffmpeg()
       .input(filestream)
       .duration(10)
       .format('mp4');


    command.save('test_out.mp4');
    });

    res.json([{
       message: 'Command sent!'
    }]);

    Here is the full output of ffmpeg when trying to do the conversion. It seems to read the details of the file fine but for some reason it fails, saying "Cannot process video: ffmpeg exited with code 1: pipe:0: Invalid data found when processing input"

    Spawned Ffmpeg with command: ffmpeg -i pipe:0 -y -t 10 -f mp4 test_out.mp4
    Stderr output: ffmpeg version 4.0.1 Copyright (c) 2000-2018 the FFmpeg developers
    Stderr output:   built with Apple LLVM version 9.1.0 (clang-902.0.39.2)
    Stderr output:   configuration: --prefix=/opt/local --enable-swscale --enable-avfilter --enable-avresample --enable-libmp3lame --enable-libvorbis --enable-libopus --enable-librsvg --enable-libtheora --enable-libopenjpeg --enable-libmodplug --enable-libvpx --enable-libsoxr --enable-libspeex --enable-libass --enable-libbluray --enable-lzma --enable-gnutls --enable-fontconfig --enable-libfreetype --enable-libfribidi --disable-libjack --disable-libopencore-amrnb --disable-libopencore-amrwb --disable-libxcb --disable-libxcb-shm --disable-libxcb-xfixes --disable-indev=jack --enable-opencl --disable-outdev=xv --enable-audiotoolbox --enable-videotoolbox --enable-sdl2 --mandir=/opt/local/share/man --enable-shared --enable-pthreads --cc=/usr/bin/clang --arch=x86_64 --enable-x86asm --enable-libx265 --enable-gpl --enable-postproc --enable-libx264 --enable-libxvid
    Stderr output:   libavutil      56. 14.100 / 56. 14.100
    Stderr output:   libavcodec     58. 18.100 / 58. 18.100
    Stderr output:   libavformat    58. 12.100 / 58. 12.100
    Stderr output:   libavdevice    58.  3.100 / 58.  3.100
    Stderr output:   libavfilter     7. 16.100 /  7. 16.100
    Stderr output:   libavresample   4.  0.  0 /  4.  0.  0
    Stderr output:   libswscale      5.  1.100 /  5.  1.100
    Stderr output:   libswresample   3.  1.100 /  3.  1.100
    Stderr output:   libpostproc    55.  1.100 / 55.  1.100
    Stderr output: [mov,mp4,m4a,3gp,3g2,mj2 @ 0x7fb6db80a200] stream 2, offset 0x30: partial file
    Stderr output: [mov,mp4,m4a,3gp,3g2,mj2 @ 0x7fb6db80a200] Could not find codec parameters for stream 0 (Video: h264 (avc1 / 0x31637661), none, 1920x1080, 4647 kb/s): unspecified pixel format
    Stderr output: Consider increasing the value for the 'analyzeduration' and 'probesize' options
    Stderr output: Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'pipe:0':
    Stderr output:   Metadata:
    Stderr output:     major_brand     : isom
    Stderr output:     minor_version   : 512
    Stderr output:     compatible_brands: isomiso2avc1mp41
    Stderr output:     encoder         : Lavf58.12.100
    Stderr output:     location-eng    : +60.2121+024.8754/
    Stderr output:     location        : +60.2121+024.8754/
    Stderr output:   Duration: 00:02:23.13, bitrate: N/A
    Stderr output:     Stream #0:0(eng): Video: h264 (avc1 / 0x31637661), none, 1920x1080, 4647 kb/s, SAR 1:1 DAR 16:9, 59.94 fps, 59.94 tbr, 60k tbn, 120k tbc (default)
    Stderr output:     Metadata:
    Stderr output:       handler_name    : VideoHandler
    Stderr output:     Stream #0:1(eng): Audio: aac (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 128 kb/s (default)
    Stderr output:     Metadata:
    Stderr output:       handler_name    : SoundHandler
    Stderr output:     Stream #0:2(eng): Data: none (tmcd / 0x64636D74)
    Stderr output:     Metadata:
    Stderr output:       handler_name    : TimeCodeHandler
    Stderr output: Stream mapping:
    Input is aac (mp4a / 0x6134706D) audio with h264 (avc1 / 0x31637661) video
    Stderr output:   Stream #0:0 -> #0:0 (h264 (native) -> h264 (libx264))
    Stderr output:   Stream #0:1 -> #0:1 (aac (native) -> aac (native))
    Stderr output: [mov,mp4,m4a,3gp,3g2,mj2 @ 0x7fb6db80a200] stream 0, offset 0x34: partial file
    Stderr output: pipe:0: Invalid data found when processing input
    Stderr output: Cannot determine format of input stream 0:0 after EOF
    Stderr output: Error marking filters as finished
    Stderr output: Conversion failed!
    Stderr output:
    Cannot process video: ffmpeg exited with code 1: pipe:0: Invalid data found when processing input
    Cannot determine format of input stream 0:0 after EOF
    Error marking filters as finished
    Conversion failed!
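
    The log's own hint ("Consider increasing the value for the 'analyzeduration' and 'probesize' options") points at the usual problem with piping MP4: on a non-seekable pipe:0 the demuxer only sees the bytes that have arrived so far, so it may fail to find the codec parameters it would normally locate by seeking around the file. A minimal sketch of the idea, written with plain Python subprocess purely for illustration (fluent-ffmpeg spawns the same ffmpeg binary, so the equivalent fix there is passing these as input options; file names here are placeholders):

    import subprocess

    # Stand-in for the GCS read stream: any local MP4 will do for the sketch.
    SOURCE = "examplefile.mp4"

    cmd = [
        "ffmpeg",
        "-analyzeduration", "100M", "-probesize", "100M",  # give probing more data to work with
        "-i", "pipe:0",
        "-y", "-t", "10", "-f", "mp4", "test_out.mp4",
    ]

    with open(SOURCE, "rb") as src:
        proc = subprocess.Popen(cmd, stdin=subprocess.PIPE)
        for chunk in iter(lambda: src.read(1 << 16), b""):
            proc.stdin.write(chunk)   # feed the file to ffmpeg's stdin (pipe:0)
        proc.stdin.close()            # EOF lets ffmpeg finish the conversion
        proc.wait()

    If the source file keeps its moov atom at the end, even a large probe window may not be enough on a pipe; remuxing the uploads with -movflags +faststart, or falling back to the download-to-a-temp-file approach shown above, are the common workarounds.
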
  • Python asyncio subprocess code returns "pipe closed by peer or os.write(pipe, data) raised exception."

    4 November 2022, by Duke Dougal

    I am trying to convert a synchronous Python process to asyncio. Any ideas what I am doing wrong?

    This is the synchronous code, which successfully starts ffmpeg and converts a directory of webp files into a video.

    import subprocess
import shlex
from os import listdir
from os.path import isfile, join

output_filename = 'output.mp4'
process = subprocess.Popen(shlex.split(f'ffmpeg -y -framerate 60 -i pipe: -vcodec libx265 -pix_fmt yuv420p -crf 24 output.mp4'), stdin=subprocess.PIPE)

thepath = '/home/ubuntu/webpfiles/'
thefiles = [f for f in listdir(thepath) if isfile(join(thepath, f))]
for filename in thefiles:
    absolute_path = f'{thepath}{filename}'
    with open(absolute_path, 'rb') as f:
        process.stdin.write(f.read())

process.stdin.close()
process.wait()
process.terminate()

    This async code fails:

    from os import listdir
from os.path import isfile, join
import shlex
import asyncio

outputfilename = 'output.mp4'

async def write_stdin(proc):
    thepath = '/home/ubuntu/webpfiles/'
    thefiles = [f for f in listdir(thepath) if isfile(join(thepath, f))]
    thefiles.sort()
    for filename in thefiles:
        absolute_path = f'{thepath}{filename}'
        with open(absolute_path, 'rb') as f:
            await proc.communicate(input=f.read())

async def create_ffmpeg_subprocess():
    bin = f'/home/ubuntu/bin/ffmpeg'
    params = f'-y -framerate 60 -i pipe: -vcodec libx265 -pix_fmt yuv420p -crf 24 {outputfilename}'
    proc = await asyncio.create_subprocess_exec(
        bin,
        *shlex.split(params),
        stdin=asyncio.subprocess.PIPE,
        stdout=asyncio.subprocess.PIPE,
        stderr=asyncio.subprocess.PIPE,
    )
    return proc

async def start():
    loop = asyncio.get_event_loop()
    proc = await create_ffmpeg_subprocess()
    task_stdout = loop.create_task(write_stdin(proc))
    await asyncio.gather(task_stdout)

if __name__ == '__main__':
    asyncio.run(start())

    The output for the async code is:

    pipe closed by peer or os.write(pipe, data) raised exception.
pipe closed by peer or os.write(pipe, data) raised exception.
pipe closed by peer or os.write(pipe, data) raised exception.
pipe closed by peer or os.write(pipe, data) raised exception.
pipe closed by peer or os.write(pipe, data) raised exception.
pipe closed by peer or os.write(pipe, data) raised exception.
pipe closed by peer or os.write(pipe, data) raised exception.
pipe closed by peer or os.write(pipe, data) raised exception.
pipe closed by peer or os.write(pipe, data) raised exception.
pipe closed by peer or os.write(pipe, data) raised exception.
pipe closed by peer or os.write(pipe, data) raised exception.
pipe closed by peer or os.write(pipe, data) raised exception.
pipe closed by peer or os.write(pipe, data) raised exception.
pipe closed by peer or os.write(pipe, data) raised exception.
pipe closed by peer or os.write(pipe, data) raised exception.
pipe closed by peer or os.write(pipe, data) raised exception.


    etc. (one line for each webp file)
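
    A likely cause, judging from those warnings: asyncio's Process.communicate() writes its input, closes stdin and then waits for the process to exit, so calling it once per webp file closes ffmpeg's input pipe after the very first file, and every later write hits a closed pipe. Below is a minimal sketch of the usual pattern (write to proc.stdin, drain, and close stdin only once after all files have been sent), reusing the paths and options from the original code; treat it as an illustration rather than a drop-in fix.

    import asyncio
    import shlex
    from os import listdir
    from os.path import isfile, join

    output_filename = 'output.mp4'
    thepath = '/home/ubuntu/webpfiles/'

    async def start():
        params = f'-y -framerate 60 -i pipe: -vcodec libx265 -pix_fmt yuv420p -crf 24 {output_filename}'
        proc = await asyncio.create_subprocess_exec(
            '/home/ubuntu/bin/ffmpeg',
            *shlex.split(params),
            stdin=asyncio.subprocess.PIPE,
            # stdout/stderr are deliberately not piped here: piping them without
            # ever reading them can stall ffmpeg once the pipe buffers fill up.
        )

        thefiles = sorted(f for f in listdir(thepath) if isfile(join(thepath, f)))
        for filename in thefiles:
            with open(join(thepath, filename), 'rb') as f:
                proc.stdin.write(f.read())   # queue the bytes for ffmpeg's stdin
                await proc.stdin.drain()     # wait until the pipe has accepted them
        proc.stdin.close()                   # send EOF once, after the last file
        await proc.wait()

    if __name__ == '__main__':
        asyncio.run(start())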