
Other articles (68)

  • MediaSPIP version 0.1 Beta

    16 April 2011

    MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
    The zip file provided here contains only the MediaSPIP sources in the standalone version.
    To get a working installation, you need to install all the software dependencies manually on the server.
    If you want to use this archive for a farm-mode installation, you will also need to make other modifications (...)

  • MediaSPIP 0.1 Beta version

    25 April 2011

    MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
    The zip file provided here only contains the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all the software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to carry out other manual (...)

  • Improvements to the base version

    13 September 2013

    Nicer multiple selection
    The Chosen plugin improves the usability of multiple-selection fields. See the two following images to compare.
    To use it, simply enable the Chosen plugin (General site configuration > Plugin management), then configure it (Templates > Chosen) by enabling Chosen on the public site and specifying the form elements to enhance, for example select[multiple] for multiple-selection lists (...)
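
    Outside of SPIP, the underlying Chosen library is driven by a one-line jQuery call. A minimal sketch, assuming the stock Chosen jQuery plugin is already loaded (the selector mirrors the select[multiple] example above; the width option is purely illustrative):

    // Enhance every multiple-select once the DOM is ready,
    // using the standard Chosen jQuery API.
    jQuery(function ($) {
      $('select[multiple]').chosen({
        width: '100%'  // illustrative option; adjust to the form layout
      });
    });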

On other sites (7880)

  • FFmpeg decoder yuv420p

    4 December 2015, by user2466514

    I am working on a yuv420p video player with ffmpeg, but it is not working and I can't find out why. I have spent the whole week on it...

    So I have a test which just decodes some frames and reads them, but the output always differs, and it's really weird.

    I use a video (mp4, yuv420p) which colors one more pixel black in each frame:

    To get the video, paste http://sendvid.com/b1sgf8r1 into a website like http://www.telechargerunevideo.com/en/

    VideoContext is just a small struct:

    struct  VideoContext {

    unsigned int      currentFrame;
    std::size_t       size;
    int               width;
    int               height;
    bool              pause;
    AVFormatContext*  formatCtx;
    AVCodecContext*   codecCtxOrig;
    AVCodecContext*   codecCtx;
    int               streamIndex;
    };

    So I have a function to count the number of black pixels:

    std::size_t  checkFrameNb(const AVFrame* frame) {

       std::size_t  nb = 0;

       for (int y = 0; y < frame->height; ++y) {
         for (int x = 0 ; x < frame->width; ++x) {

           if (frame->data[0][(y * frame->linesize[0]) + x] == BLACK_FRAME.y
               && frame->data[1][(y / 2 * frame->linesize[1]) + x / 2] == BLACK_FRAME.u
               && frame->data[2][(y / 2 * frame->linesize[2]) + x / 2] == BLACK_FRAME.v)
             ++nb;
         }
       }
       return nb;
     }

    And this is how I decode one frame:

    const AVFrame*  VideoDecoder::nextFrame(entities::VideoContext& context) {

     int frameFinished;
     AVPacket packet;

     // Allocate video frame
     AVFrame*  frame = av_frame_alloc();
     if(frame == nullptr)
       throw;

     // Initialize frame->linesize
     avpicture_fill((AVPicture*)frame, nullptr, AV_PIX_FMT_YUV420P, context.width, context.height);

     while(av_read_frame(context.formatCtx, &packet) >= 0) {

       // Is this a packet from the video stream?
       if(packet.stream_index == context.streamIndex) {
         // Decode video frame
         avcodec_decode_video2(context.codecCtx, frame, &frameFinished, &packet);

         // Did we get a video frame?
         if(frameFinished) {

           // Free the packet that was allocated by av_read_frame
           av_free_packet(&packet);
           ++context.currentFrame;
           return frame;
         }
       }
     }

     // Free the packet that was allocated by av_read_frame
     av_free_packet(&packet);
     throw core::GlobalException("nextFrame", "Frame decode failed");
    }

    Is there already something wrong?

    Maybe the context initialization will be useful:

    entities::VideoContext  VideoLoader::loadVideoContext(const char* file,
                                                         const int width,
                                                         const int height) {

     entities::VideoContext  context;

     // Register all formats and codecs
     av_register_all();

     context.formatCtx = avformat_alloc_context();


     // Open video file
     if(avformat_open_input(&context.formatCtx, file, nullptr, 0) != 0)
       throw; // Couldn't open file

     // Retrieve stream information
     if(avformat_find_stream_info(context.formatCtx, nullptr) > 0)
       throw; // Couldn't find stream information

     // Dump information about file onto standard error
     //av_dump_format(m_formatCtx, 0, file, 1);


     // Find the first video stream because we don't need more
     for(unsigned int i = 0; i < context.formatCtx->nb_streams; ++i)
       if(context.formatCtx->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO) {
         context.streamIndex = i;
         context.codecCtx = context.formatCtx->streams[i]->codec;
         break;
       }
     if(context.codecCtx == nullptr)
       throw; // Didn't find a video stream


     // Find the decoder for the video stream
     AVCodec*  codec = avcodec_find_decoder(context.codecCtx->codec_id);
     if(codec == nullptr)
       throw; // Codec not found
     // Copy context
     if ((context.codecCtxOrig = avcodec_alloc_context3(codec)) == nullptr)
       throw;
     if(avcodec_copy_context(context.codecCtxOrig, context.codecCtx) != 0)
       throw; // Error copying codec context
     // Open codec
     if(avcodec_open2(context.codecCtx, codec, nullptr) < 0)
       throw; // Could not open codec

     context.currentFrame = 0;
     decoder::VideoDecoder::setVideoSize(context);
     context.pause = false;
     context.width = width;
     context.height = height;

     return std::move(context);
    }

    I know it's not a small piece of code; if you have any idea how to make the example more brief, go ahead.

    And if someone has an idea about this issue, here is my output:

    9 - 10 - 12 - 4 - 10 - 14 - 11 - 8 - 9 - 10

    But I want:

    1 - 2 - 3 - 4 - 5 - 6 - 7 - 8 - 9 - 10

    PS: the code that gets the fps and the video size is copy-pasted from OpenCV.

  • Edit audio files on the server with Meteor

    28 December 2015, by Mohammed Hussein

    I am building a Meteor application to split uploaded audio files.

    I upload the audio files and store them using GridFS:

    child = Npm.require('child_process');
    var fs = Npm.require('fs');

    storagePath = fs.realpathSync(process.env.PWD + '/audio');
    StaticServer.add('/audio', clipsPath);

    and then, using a method, I split the audio file with:

    child.exec(command);

    The command is the ffmpeg command used to cut the source audio file and store the result under storagePath.
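
    For illustration only, a minimal sketch of what such a call could look like; sourcePath, outputPath, startSeconds and durationSeconds are hypothetical placeholders:

    // Hypothetical example: cut a segment without re-encoding (-acodec copy)
    // and write it under storagePath so it can be served statically.
    var command = 'ffmpeg -i ' + sourcePath +
                  ' -ss ' + startSeconds +
                  ' -t ' + durationSeconds +
                  ' -acodec copy ' + outputPath;

    child.exec(command, function (error, stdout, stderr) {
      if (error)
        console.log('ffmpeg failed: ' + stderr);
    });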

    The application worked fine locally, but when I tried to deploy it to DigitalOcean I got errors stating that the file /audio does not exist.
    I use mupx to deploy, and the error appears after "verifying deployment".

    Here is the error:

       -----------------------------------STDERR-----------------------------------
       eteor-dev-bundle@0.0.0 No README data
       => Starting meteor app on port:80

       /bundle/bundle/programs/server/node_modules/fibers/future.js:245
                                                       throw(ex);
                                                             ^
       Error: ENOENT, no such file or directory '/bundle/bundle/audio'
           at Object.fs.lstatSync (fs.js:691:18)
           at Object.realpathSync (fs.js:1279:21)
           at server/startup.js:10:20
           at /bundle/bundle/programs/server/boot.js:249:5
       npm WARN package.json meteor-dev-bundle@0.0.0 No description
       npm WARN package.json meteor-dev-bundle@0.0.0 No repository field.
       npm WARN package.json meteor-dev-bundle@0.0.0 No README data
       => Starting meteor app on port:80

       /bundle/bundle/programs/server/node_modules/fibers/future.js:245
                                                       throw(ex);
                                                             ^
       Error: ENOENT, no such file or directory '/bundle/bundle/audio'
           at Object.fs.lstatSync (fs.js:691:18)
           at Object.realpathSync (fs.js:1279:21)
           at server/startup.js:10:20
           at /bundle/bundle/programs/server/boot.js:249:5

       => Redeploying previous version of the app

       -----------------------------------STDOUT-----------------------------------

       To see more logs type 'mup logs --tail=50'

       ----------------------------------------------------------------------------

    The main question is how to generate an output file using ffmpeg and store it in a place where I can access it and display it in the browser.

  • How to store a raw RTSP video stream to a file?

    12 January 2016, by Siva Prasanna

    I'm playing around with the open-source FFmpeg tool. I want to save an RTSP video stream to a local file. I came across this question and tried executing a similar command, but it's not working. It doesn't throw any error either. My command is:
    ffmpeg -i rtsp://test.vibrtech.com/mov/video.sav?MAC=00C2100F124^&channel=2^&GUID=betauser -acodec copy -vcodec copy c:/video.mp4

    but when I execute this command, all I get is this (with the cursor blinking like forever, until I manually press Ctrl+C):

    D:\..\bin>ffmpeg -i rtsp://test.vibrtech.com/mov/video.sav?MAC=00C2100F124^&channel=2^&GUID=betauser -acodec copy -vcodec copy c:/video.mp4

    ffmpeg version N-77704-g68eb208 Copyright (c) 2000-2016 the FFmpeg developers
     built with gcc 5.2.0 (GCC)
     configuration: --enable-gpl --enable-version3 --disable-w32threads --enable-av
    isynth --enable-bzlib --enable-fontconfig --enable-frei0r --enable-gnutls
    --enable-iconv --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca
    --enable-libdcadec --enable-libfreetype --enable-libgme --enable-libgsm
    --enable-libilbc --enable-libmodplug --enable-libmp3lame
    --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg
    --enable-libopus --enable-librtmp --enable-libschroedinger
    --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libtwolame
    --enable-libvidstab --enable-libvo-aacenc --enable-libvo-amrwbenc
    --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp
    --enable-libx264 --enable-libx265 --enable-libxavs --enable-libxvid
    --enable-libzimg --enable-lzma --enable-decklink --enable-zlib
     libavutil      55. 12.100 / 55. 12.100
     libavcodec     57. 21.100 / 57. 21.100
     libavformat    57. 21.100 / 57. 21.100
     libavdevice    57.  0.100 / 57.  0.100
     libavfilter     6. 23.100 /  6. 23.100
     libswscale      4.  0.100 /  4.  0.100
     libswresample   2.  0.101 /  2.  0.101
     libpostproc    54.  0.100 / 54.  0.100
    _

    Can anyone tell me what I'm doing wrong here? Or is there any way this could be achieved using other commands?

    P.S.: I'm getting the stream from this URL: rtsp://test.vibrtech.com/mov/video.sav?MAC=00C2100F124&channel=2&GUID=betaUser
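
    For comparison, a hedged variant of the same command: it quotes the URL instead of escaping the ampersands, forces TCP transport for RTSP, and bounds the recording with -t so ffmpeg stops on its own. The 60-second duration is purely illustrative, and whether this helps depends on the server:

    ffmpeg -rtsp_transport tcp -i "rtsp://test.vibrtech.com/mov/video.sav?MAC=00C2100F124&channel=2&GUID=betaUser" -acodec copy -vcodec copy -t 60 c:/video.mp4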