Other articles (58)

  • Contribute to documentation

    13 April 2011

    Documentation is vital to the development of improved technical capabilities.
    MediaSPIP welcomes documentation by users as well as developers, including:
    - critiques of existing features and functions
    - articles contributed by developers, administrators, content producers and editors
    - screenshots to illustrate the above
    - translations of existing documentation into other languages
    To contribute, register to the project users’ mailing (...)

  • Adding notes and captions to images

    7 February 2011

    To be able to add notes and captions to images, the first step is to install the "Légendes" plugin.
    Once the plugin is activated, you can configure it in the configuration area in order to change the rights for creating, modifying and deleting notes. By default, only site administrators can add notes to images.
    Changes when adding a media item
    When adding a media item of type "image", a new button appears above the preview (...)

  • Encoding and processing into web-friendly formats

    13 April 2011

    MediaSPIP automatically converts uploaded files to internet-compatible formats.
    Video files are encoded in Ogv and WebM (supported by HTML5) and MP4 (supported by Flash).
    Audio files are encoded in Ogg (supported by HTML5) and MP3 (supported by Flash).
    Where possible, text is analyzed in order to retrieve the data needed for search engine detection, and then exported as a series of image files.
    All uploaded files are stored online in their original format, so you can (...)

On other sites (9519)

  • FFmpeg decoder yuv420p

    4 December 2015, by user2466514

    I am working on a YUV420p video player with FFmpeg, but it is not working and I cannot figure out why. I have spent the whole week on it...

    So I have a test that just decodes some frames and reads them, but the output always differs, and it is really strange.

    I use a video (MP4, yuv420p) that colors one more pixel black in each frame:

    To get the video, paste http://sendvid.com/b1sgf8r1 into a download site like http://www.telechargerunevideo.com/en/

    VideoContext is just a small struct:

    struct  VideoContext {

    unsigned int      currentFrame;
    std::size_t       size;
    int               width;
    int               height;
    bool              pause;
    AVFormatContext*  formatCtx;
    AVCodecContext*   codecCtxOrig;
    AVCodecContext*   codecCtx;
    int               streamIndex;
    };

    So I have a function that counts the number of black pixels:

    std::size_t  checkFrameNb(const AVFrame* frame) {

       std::size_t  nb = 0;

       for (int y = 0; y < frame->height; ++y) {
         for (int x = 0 ; x < frame->width; ++x) {

           if (frame->data[0][(y * frame->linesize[0]) + x] == BLACK_FRAME.y
               && frame->data[1][(y / 2 * frame->linesize[1]) + x / 2] == BLACK_FRAME.u
               && frame->data[2][(y / 2 * frame->linesize[2]) + x / 2] == BLACK_FRAME.v)
             ++nb;
         }
       }
       return nb;
     }

    And this is how I decode one frame:

    const AVFrame*  VideoDecoder::nextFrame(entities::VideoContext& context) {

     int frameFinished;
     AVPacket packet;

     // Allocate video frame
     AVFrame*  frame = av_frame_alloc();
     if(frame == nullptr)
       throw core::GlobalException("nextFrame", "Frame allocation failed");

     // Initialize frame->linesize (no buffer is attached here; the decoder fills data[])
     avpicture_fill((AVPicture*)frame, nullptr, AV_PIX_FMT_YUV420P, context.width, context.height);

     while(av_read_frame(context.formatCtx, &packet) >= 0) {

       // Is this a packet from the video stream?
       if(packet.stream_index == context.streamIndex) {
         // Decode video frame
         avcodec_decode_video2(context.codecCtx, frame, &frameFinished, &packet);

         // Did we get a video frame?
         if(frameFinished) {

           // Free the packet that was allocated by av_read_frame
           av_free_packet(&packet);
           ++context.currentFrame;
           return frame;
         }
       }
     }

     // Free the packet that was allocated by av_read_frame
     av_free_packet(&packet);
     throw core::GlobalException("nextFrame", "Frame decode failed");
    }

    Is there already something wrong?

    Maybe the context initialization will be useful:

    entities::VideoContext  VideoLoader::loadVideoContext(const char* file,
                                                         const int width,
                                                         const int height) {

     entities::VideoContext  context;
     context.codecCtx = nullptr;
     context.codecCtxOrig = nullptr;

     // Register all formats and codecs
     av_register_all();

     context.formatCtx = avformat_alloc_context();

     // Open video file
     if(avformat_open_input(&context.formatCtx, file, nullptr, nullptr) != 0)
       throw core::GlobalException("loadVideoContext", "Couldn't open file");

     // Retrieve stream information
     if(avformat_find_stream_info(context.formatCtx, nullptr) < 0)
       throw core::GlobalException("loadVideoContext", "Couldn't find stream information");

     // Dump information about file onto standard error
     //av_dump_format(m_formatCtx, 0, file, 1);

     // Find the first video stream because we don't need more
     for(unsigned int i = 0; i < context.formatCtx->nb_streams; ++i)
       if(context.formatCtx->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO) {
         context.streamIndex = i;
         context.codecCtx = context.formatCtx->streams[i]->codec;
         break;
       }
     if(context.codecCtx == nullptr)
       throw core::GlobalException("loadVideoContext", "Didn't find a video stream");

     // Find the decoder for the video stream
     AVCodec*  codec = avcodec_find_decoder(context.codecCtx->codec_id);
     if(codec == nullptr)
       throw core::GlobalException("loadVideoContext", "Codec not found");
     // Copy context
     if((context.codecCtxOrig = avcodec_alloc_context3(codec)) == nullptr)
       throw core::GlobalException("loadVideoContext", "Couldn't allocate codec context");
     if(avcodec_copy_context(context.codecCtxOrig, context.codecCtx) != 0)
       throw core::GlobalException("loadVideoContext", "Error copying codec context");
     // Open codec
     if(avcodec_open2(context.codecCtx, codec, nullptr) < 0)
       throw core::GlobalException("loadVideoContext", "Could not open codec");

     context.currentFrame = 0;
     decoder::VideoDecoder::setVideoSize(context);
     context.pause = false;
     context.width = width;
     context.height = height;

     return context;
    }

    I know it is not a small piece of code; if you have any idea of how to make the example shorter, go ahead.

    And in case someone has an idea about this issue, here is my output:

    9 - 10 - 12 - 4 - 10 - 14 - 11 - 8 - 9 - 10

    But I want:

    1 - 2 - 3 - 4 - 5 - 6 - 7 - 8 - 9 - 10

    PS: the code that gets the fps and video size is copy-pasted from OpenCV.

  • Meteor edit audio files on the server

    28 December 2015, by Mohammed Hussein

    I am building a Meteor application to split uploaded audio files.
    I upload the audio files and store them using GridFS.

    child = Npm.require('child_process');

    var fs = Npm.require('fs');

    storagePath = fs.realpathSync(process.env.PWD + '/audio');
    StaticServer.add('/audio', clipsPath);

    and then, using a method, I split the audio file with
    child.exec(command);

    The command is the ffmpeg command used to cut the source audio file and store the result in storagePath.
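    For illustration, such a cut command can be built and handed to `child.exec` like this. This is only a sketch: the helper name, paths and timestamps are hypothetical, while `-ss` (seek), `-t` (duration) and `-c copy` (no re-encode) are standard ffmpeg options:

```javascript
// Build an ffmpeg command that cuts [start, start + duration) seconds out of src.
// All paths here are hypothetical examples.
function buildCutCommand(src, start, duration, out) {
  return 'ffmpeg -y -i ' + src +
         ' -ss ' + start +       // seek to the clip start (seconds)
         ' -t ' + duration +     // clip length (seconds)
         ' -c copy ' + out;      // copy the stream without re-encoding
}

var command = buildCutCommand('/audio/source.mp3', 10, 30, '/audio/clip.mp3');
// child.exec(command);  // run it as in the question
console.log(command);
```

    Note that `-c copy` cuts on frame boundaries without re-encoding; dropping it makes ffmpeg re-encode, which is slower but cuts more precisely.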

    The application worked fine locally, but when I tried to deploy it to DigitalOcean I got errors stating that the file /audio does not exist.
    I use mupx to deploy, and the error appears after "verifying deployment".

    Here is the error:

       -----------------------------------STDERR-----------------------------------
       eteor-dev-bundle@0.0.0 No README data
       => Starting meteor app on port:80

       /bundle/bundle/programs/server/node_modules/fibers/future.js:245
                                                       throw(ex);
                                                             ^
       Error: ENOENT, no such file or directory '/bundle/bundle/audio'
           at Object.fs.lstatSync (fs.js:691:18)
           at Object.realpathSync (fs.js:1279:21)
           at server/startup.js:10:20
           at /bundle/bundle/programs/server/boot.js:249:5
       npm WARN package.json meteor-dev-bundle@0.0.0 No description
       npm WARN package.json meteor-dev-bundle@0.0.0 No repository field.
       npm WARN package.json meteor-dev-bundle@0.0.0 No README data
       => Starting meteor app on port:80

       /bundle/bundle/programs/server/node_modules/fibers/future.js:245
                                                       throw(ex);
                                                             ^
       Error: ENOENT, no such file or directory '/bundle/bundle/audio'
           at Object.fs.lstatSync (fs.js:691:18)
           at Object.realpathSync (fs.js:1279:21)
           at server/startup.js:10:20
           at /bundle/bundle/programs/server/boot.js:249:5

       => Redeploying previous version of the app

       -----------------------------------STDOUT-----------------------------------

       To see more logs type 'mup logs --tail=50'

       ----------------------------------------------------------------------------

    The main question is: how do I generate an output file using an ffmpeg command and store it in a place where I can access it and display it in the browser?

  • Merging audio(aac) and video (h.264 in mp4 container) into a mp4 container using Xuggler

    26 August 2016, by Handroid

    Here is the code I am using:

       String filenamevideo = videoFilePath;  // video.mp4

       String filenameaudio = audioAACFilePath;  // audio.aac



       IMediaWriter mWriter = ToolFactory.makeWriter(videoWithAudioFilePath); // output file

       IContainer containerVideo = IContainer.make();
       IContainer containerAudio = IContainer.make();

       if (containerVideo.open(filenamevideo, IContainer.Type.READ, null) < 0)
           throw new IllegalArgumentException("Can't find " + filenamevideo);

       if (containerAudio.open(filenameaudio, IContainer.Type.READ, null) < 0)
           throw new IllegalArgumentException("Can't find " + filenameaudio);

       int numStreamVideo = containerVideo.getNumStreams();
       int numStreamAudio = containerAudio.getNumStreams();    

       int videostreamt = -1; // this is the video stream id
       int audiostreamt = -1;

       IStreamCoder videocoder = null;

       for (int i = 0; i < numStreamVideo; i++) {
           IStream stream = containerVideo.getStream(i);
           IStreamCoder code = stream.getStreamCoder();

           if (code.getCodecType() == ICodec.Type.CODEC_TYPE_VIDEO) {
               videostreamt = i;
               videocoder = code;
               break;
           }

       }

       for (int i = 0; i < numStreamAudio; i++) {
           IStream stream = containerAudio.getStream(i);
           IStreamCoder code = stream.getStreamCoder();

           if (code.getCodecType() == ICodec.Type.CODEC_TYPE_AUDIO) {
               audiostreamt = i;
               break;
           }

       }

       if (videostreamt == -1)
           throw new RuntimeException("No video stream found");
       if (audiostreamt == -1)
           throw new RuntimeException("No audio stream found");

       if (videocoder.open() < 0)
           throw new RuntimeException("Can't open video coder");
       IPacket packetvideo = IPacket.make();
       IPacket packetvideo = IPacket.make();

       IStreamCoder audioCoder = containerAudio.getStream(audiostreamt).getStreamCoder();

       if (audioCoder.open() < 0)
           throw new RuntimeException("Can't open audio coder");

       mWriter.addAudioStream(0, 0, ICodec.ID.CODEC_ID_AAC, audioCoder.getChannels(),audioCoder.getSampleRate());

       mWriter.addVideoStream(1, 0, ICodec.ID.CODEC_ID_H264, videocoder.getWidth(), videocoder.getHeight());


       IPacket packetaudio = IPacket.make();

       while (containerVideo.readNextPacket(packetvideo) >= 0 || containerAudio.readNextPacket(packetaudio) >= 0) {

           if (packetvideo.getStreamIndex() == videostreamt) {

               // video packet
               IVideoPicture picture = IVideoPicture.make(videocoder.getPixelType(), videocoder.getWidth(),
                       videocoder.getHeight());
               int offset = 0;
               while (offset < packetvideo.getSize()) {
                   int bytesDecoded = videocoder.decodeVideo(picture, packetvideo, offset);
                   if (bytesDecoded < 0)
                       throw new RuntimeException("bytesDecoded not working");
                   offset += bytesDecoded;
                   if (picture.isComplete()) {
                       // System.out.println(picture.getPixelType());
                       mWriter.encodeVideo(1, picture);
                   }
               }
           }

           if (packetaudio.getStreamIndex() == audiostreamt) {
               // audio packet
               IAudioSamples samples = IAudioSamples.make(512, audioCoder.getChannels(), IAudioSamples.Format.FMT_S32);
               int offset = 0;
               while (offset < packetaudio.getSize()) {
                   int bytesDecodedaudio = audioCoder.decodeAudio(samples, packetaudio, offset);
                   if (bytesDecodedaudio < 0)
                       throw new RuntimeException("could not detect audio");
                   offset += bytesDecodedaudio;
                   if (samples.isComplete()) {
                       mWriter.encodeAudio(0, samples);
                   }
               }

           }          
       }

    The output file (mp4) is generated, but I am unable to play it with VLC or in a JavaFX scene media player.

    Please advise whether I am using the above code correctly, or help me with a possible solution for merging audio (AAC) and video (H.264) into an MP4 container.
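    As a point of comparison only (not a Xuggler answer), the same mux can be done with the ffmpeg command line when both inputs are already encoded; the filenames here are hypothetical:

```shell
# Copy the H.264 and AAC streams into an MP4 container without re-encoding;
# -shortest stops at the end of the shorter input.
# Older ffmpeg builds may additionally need: -bsf:a aac_adtstoasc
ffmpeg -i video.mp4 -i audio.aac -c copy -shortest output.mp4
```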

    Thanks in advance.