
Keyword: Tags / Valkaama

Other articles (52)

  • MediaSPIP version 0.1 Beta

    16 April 2011

    MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
    The zip file provided here contains only the MediaSPIP sources, in the standalone version.
    To get a working installation, all of the software dependencies must be installed manually on the server.
    If you want to use this archive for a farm-mode installation, you will also need to make further changes (...)

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your installed MédiaSpip is at version 0.2 or higher. If necessary, contact the administrator of your MédiaSpip to find out.

  • Changing your graphic theme

    22 February 2011

    The graphic theme does not affect the actual layout of the elements on the page; it only changes their appearance.
    The placement can indeed be changed, but this change is purely visual and does not affect the semantic representation of the page.
    Changing the graphic theme in use
    To change the graphic theme in use, the zen-garden plugin must be enabled on the site.
    Then simply go to the configuration area of the (...)

On other sites (11584)

  • iOS FFmpeg: Requested output format 'flv' is not a suitable output format

    14 November 2013, by user2992563

    I just upgraded my iOS-compiled FFmpeg library to 1.2.1 and now I'm getting the following error message:
    Requested output format 'flv' is not a suitable output format.

    I tried changing the format to 'avi' and 'mov' as well, but no matter which format_name I pass, I get the same error message.

    This is how I set up the format_name:

    avformat_alloc_output_context2(&file, NULL, "flv", cname)

    And this is how I write the stream packets:

    // Appends a data packet to the rtmp stream
    -(bool) writePacket: (Demuxer *) source
    {
        int code = 0;

        AVCodecContext
            *videoCodec = [source videoCodec],
            *audioCodec = [source audioCodec];

        // Write headers
        if(useHeaders)
        {
            AVStream
                *video = av_new_stream(file, VIDEO_STREAM),
                *audio = av_new_stream(file, AUDIO_STREAM);

            if (!video || !audio)
                @throw [NSException exceptionWithName: @"StreamingError" reason: @"Could not allocate streams" userInfo: nil];

            // Clone input codecs and extra data
            memcpy(video->codec, videoCodec, sizeof(AVCodecContext));
            memcpy(audio->codec, audioCodec, sizeof(AVCodecContext));

            video->codec->extradata = av_malloc(video->codec->extradata_size);
            audio->codec->extradata = av_malloc(audio->codec->extradata_size);

            memcpy(video->codec->extradata, videoCodec->extradata, video->codec->extradata_size);
            memcpy(audio->codec->extradata, audioCodec->extradata, audio->codec->extradata_size);

            // Use FLV codec tags
            video->codec->codec_tag = FLV_TAG_TYPE_VIDEO;
            audio->codec->codec_tag = FLV_TAG_TYPE_AUDIO;
            video->codec->flags |= CODEC_FLAG_GLOBAL_HEADER;
            audio->codec->flags |= CODEC_FLAG_GLOBAL_HEADER;
            audio->codec->sample_rate = INPUT_AUDIO_SAMPLING_RATE;

            // Update time base
            video->codec->time_base.num = 1;
            audio->codec->time_base.den = video->codec->time_base.den = FLV_TIMEBASE;

            // Signal bitwise stream copying
            //video->stream_copy = audio->stream_copy = 1;

            if((code = avformat_write_header(file, NULL)))
                @throw [NSException exceptionWithName: @"StreamingError" reason: @"Could not write stream headers" userInfo: nil];
            useHeaders = NO;
        }

        bool isVideo;
        AVPacket *packet = [source readPacket: &isVideo];

        if(!packet)
            return NO;

        if(isVideo)
        {
            packet->stream_index = VIDEO_STREAM;
            packet->dts = packet->pts = videoPosition;
            videoPosition += packet->duration = FLV_TIMEBASE * packet->duration * videoCodec->ticks_per_frame * videoCodec->time_base.num / videoCodec->time_base.den;
        }
        else
        {
            packet->stream_index = AUDIO_STREAM;
            packet->dts = packet->pts = audioPosition;
            audioPosition += packet->duration = FLV_TIMEBASE * packet->duration / INPUT_AUDIO_SAMPLING_RATE;
        }

        packet->pos = -1;
        packet->convergence_duration = AV_NOPTS_VALUE;

        // This sometimes fails without being a critical error, so no exception is raised
        if((code = av_interleaved_write_frame(file, packet)))
            NSLog(@"Streamer::Couldn't write frame");

        av_free_packet(packet);
        return YES;
    }

    In the code above I had to comment out this line, as stream_copy has been removed from the newest version of FFmpeg:

    video->stream_copy = audio->stream_copy = 1;

    Any help would be greatly appreciated!
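
    That particular error comes from avformat_alloc_output_context2() when libavformat cannot find any muxer named "flv"; on a custom iOS build that most often means the FLV muxer was not compiled in, or av_register_all() was never called before allocating the context. A minimal plain-C sketch of that check (open_flv_output and its url argument are hypothetical, not part of the question's code):

    #include <libavformat/avformat.h>
    #include <stdio.h>

    /* Sketch (hypothetical helper): confirm the FLV muxer is available
       before allocating the output context. */
    static AVFormatContext *open_flv_output(const char *url)
    {
        AVFormatContext *ctx = NULL;
        int code;

        av_register_all();  /* registers every compiled-in muxer/demuxer (FFmpeg 1.x API) */

        if (!av_guess_format("flv", NULL, NULL)) {
            fprintf(stderr, "This FFmpeg build has no FLV muxer (check the configure flags)\n");
            return NULL;
        }

        code = avformat_alloc_output_context2(&ctx, NULL, "flv", url);
        if (code < 0 || !ctx) {
            fprintf(stderr, "avformat_alloc_output_context2 failed: %d\n", code);
            return NULL;
        }

        return ctx;
    }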

  • Unhandled stream error in pipe: write EPIPE in Node.js

    28 November 2013, by Michael Romanenko

    The idea is to serve screenshots of an RTSP video stream with an Express.js server. There is a continuously running spawned openRTSP process in flowing mode (its stdout is consumed by another ffmpeg process):

    function spawnProcesses (camera) {
      var openRTSP = spawn('openRTSP', ['-c', '-v', '-t', camera.rtsp_url]),
          encoder = spawn('ffmpeg', ['-i', 'pipe:', '-an', '-vcodec', 'libvpx', '-r', 10, '-f', 'webm', 'pipe:1']);

      openRTSP.stdout.pipe(encoder.stdin);

      openRTSP.on('close', function (code) {
        if (code !== 0) {
          console.log('openRTSP process exited with code ' + code);
        }
      });

      encoder.on('close', function (code) {
        if (code !== 0) {
          console.log('Encoder process exited with code ' + code);
        }
      });

      return { rtsp: openRTSP, encoder: encoder };
    }

    ...

    camera.proc = spawnProcesses(camera);

    There is an Express server with a single route:

    app.get('/cameras/:id.jpg', function(req, res){
      var camera = _.find(cameras, {id: parseInt(req.params.id, 10)});
      if (camera) {
        res.set({'Content-Type': 'image/jpeg'});
        var ffmpeg = spawn('ffmpeg', ['-i', 'pipe:', '-an', '-vframes', '1', '-s', '800x600', '-f', 'image2', 'pipe:1']);
        camera.proc.rtsp.stdout.pipe(ffmpeg.stdin);
        ffmpeg.stdout.pipe(res);
      } else {
        res.status(404).send('Not found');
      }
    });

    app.listen(3333);

    When I request http://localhost:3333/cameras/1.jpg I get the desired image, but from time to time the app breaks with the error:

    stream.js:94
     throw er; // Unhandled stream error in pipe.
           ^
    Error: write EPIPE
       at errnoException (net.js:901:11)
       at Object.afterWrite (net.js:718:19)

    The strange thing is that sometimes it successfully streams the image to the res stream and closes the child process without any error, but sometimes it streams the image and then falls over.

    I tried creating on('error', ...) event handlers on every possible stream, and tried changing the pipe(...) calls to on('data', ...) constructions, but without success.

    My environment: node v0.10.22, OS X Mavericks 10.9.

    UPDATE:

    I wrapped the spawn('ffmpeg', ...) block with try-catch:

    app.get('/cameras/:id.jpg', function(req, res){
      ....
      try {
        var ffmpeg = spawn('ffmpeg', ['-i', 'pipe:', '-an', '-vframes', '1', '-s', '800x600', '-f', 'image2', 'pipe:1']);
        camera.proc.rtsp.stdout.pipe(ffmpeg.stdin);
        ffmpeg.stdout.pipe(res);
      } catch (e) {
        console.log("Gotcha!", e);
      }
      ....
    });

    ... and this error disappeared, but the log is silent; it doesn't catch any errors. What's wrong?
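
    A side note on the crash itself (a sketch using the question's own variables, not a verified fix): the EPIPE is emitted asynchronously on ffmpeg.stdin after the one-shot ffmpeg process exits (it quits as soon as '-vframes 1' is satisfied), so a synchronous try/catch around spawn() never sees it. Handling the writable end's 'error' event, unpiping once the child is gone, and killing it if the client disconnects would look roughly like this:

    app.get('/cameras/:id.jpg', function(req, res){
      var camera = _.find(cameras, {id: parseInt(req.params.id, 10)});
      if (!camera) { return res.status(404).send('Not found'); }

      res.set({'Content-Type': 'image/jpeg'});
      var ffmpeg = spawn('ffmpeg', ['-i', 'pipe:', '-an', '-vframes', '1', '-s', '800x600', '-f', 'image2', 'pipe:1']);
      var source = camera.proc.rtsp.stdout;

      source.pipe(ffmpeg.stdin);
      ffmpeg.stdout.pipe(res);

      // ffmpeg closes its stdin once it has grabbed its single frame;
      // swallow the resulting EPIPE instead of letting it crash the process.
      ffmpeg.stdin.on('error', function(err){
        if (err.code !== 'EPIPE') { console.log('ffmpeg stdin error:', err); }
      });

      // Stop feeding the dead child, and kill it if the client goes away first.
      ffmpeg.on('close', function(){ source.unpipe(ffmpeg.stdin); });
      res.on('close', function(){ ffmpeg.kill(); });
    });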

  • Variable size for ffmpeg filter

    15 November 2013, by DrakaSAN

    I am creating a bunch of scripts to make ffmpeg easier to use.

    All my scripts now work, but a lot of values are hardcoded, which means they expect an exact video size (854x240, the size of my test videos).

    A good example is better than a long explanation:

    ffmpeg -i 0.mp4 -vf "
    scale=854/2:-1 [low];
    color=c=black@1.0:s=854x240:r=29.97:d=30.0 [bg];
    movie=1.mp4, scale=854/2:-1 [high];
    [bg][high] overlay=0:0 [bgh];
    [bgh][low] overlay=854/2:-1" leftright.mp4

    It takes 0.mp4 and 1.mp4 and puts them side by side in the same video, but the dimension values are hardcoded. Still, it works.

    I know I can use iw and ih as "input width" and "input height" in the first scale filter, but when I try to use anything else as the dimensions of the color background, it just throws me an error:

    Unable to parse option value "iw:ih" as image size
    Unable to parse option value "iw:240" as image size
    Unable to parse option value "860*2:240" as image size

    and I end up putting 1720x240 back in, which is really bad.

    Is there a way to bypass this, or to use input-dependent values?

    Edit:

    Starting with videos 1.mp4 (854x240) and 2.mp4 (520x520) (example values), put them side by side in out.mp4 (which in this case will have the dimensions max(height) x (2 x max(width)), so in this case 854x1040), with a black background.

     ax
    <---->
    .____.^
    |    ||
    |  1 ||
    |    ||ay
    |    ||
    |____|V

       bx
    <-------->
    .________.^
    |   2    ||by
    |________|V

    Will end up as

    ay>ax
    bx>by

       bx        bx
    <--------><-------->
    xx.____.xxxxxxxxxxxx^
    xx|    |xxxxxxxxxxxx|
    xx|  1 |xx.________.|
    xx|    |xx|    2   ||ay
    xx|    |xx|________||
    xx|____|xxxxxxxxxxxxV
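
    For what it's worth, a hedged sketch of one way to avoid the fixed-size color source (using the question's 0.mp4 and 1.mp4; sidebyside.mp4 is a made-up output name, and it assumes the second input fits inside the right half of the doubled canvas): the pad filter does accept iw/ih expressions, so the black canvas can be derived from the first input instead of being written out as a literal size, and overlay can position with main_w/main_h/overlay_h:

    # sketch: pad input 0 to twice its own width, then overlay input 1 on the right half
    ffmpeg -i 0.mp4 -i 1.mp4 -filter_complex "
    [0:v] pad=w=2*iw:h=ih:color=black [bg];
    [bg][1:v] overlay=x=main_w/2:y=(main_h-overlay_h)/2" sidebyside.mp4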