

Other articles (51)

  • Changing the graphical theme

    22 February 2011

    The graphical theme does not affect the actual layout of the elements on the page. It only changes the appearance of those elements.
    The placement can indeed be modified, but this change is purely visual and does not alter the page's semantic structure.
    Changing the graphical theme in use
    To change the graphical theme in use, the zen-garden plugin must be enabled on the site.
    Then simply go to the configuration area of the (...)

  • Permissions overridden by plugins

    27 April 2010

    MediaSPIP core
    autoriser_auteur_modifier(), so that visitors can edit their own information on the authors page

  • HTML5 audio and video support

    13 April 2011

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
    For older browsers the Flowplayer flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

On other sites (7659)

  • Libav AVFrame to Opencv Mat to AVPacket conversion

    14 March 2018, by Davood Falahati

    I am new to libav and I am writing video manipulation software that uses OpenCV at its heart. Briefly, what I do is as follows:

    1- read the video packet

    2- decode the packet into AVFrame

    3- convert the AVFrame to CV Mat

    4- manipulate the Mat

    5- convert the CV Mat into AVFrame

    6- encode the AVFrame into AVPacket

    7- write the packet

    8- goto 1

    I read the dranger tutorial at http://dranger.com/ffmpeg/tutorial01.html and also used the decoding_encoding example. I can read the video, extract video frames, and convert them to a CV Mat. My problem starts when converting from the CV Mat back to an AVFrame and encoding it into an AVPacket.

    Would you please help me with this?

    Here is my code:

    int main(int argc, char **argv)
    {
    AVOutputFormat *ofmt = NULL;
    AVFormatContext *ifmt_ctx = NULL, *ofmt_ctx = NULL;
    AVPacket pkt;
    AVCodecContext    *pCodecCtx = NULL;
    AVCodec           *pCodec = NULL;
    AVFrame           *pFrame = NULL;
    AVFrame           *pFrameRGB = NULL;
    int videoStream=-1;
    int audioStream=-1;
    int               frameFinished;
    int               numBytes;
    uint8_t           *buffer = NULL;
    struct SwsContext *sws_ctx = NULL;
    FrameManipulation *mal_frame;

    const char *in_filename, *out_filename;
    int ret, i;
    if (argc < 3) {

       printf("usage: %s input output\n"
              "API example program to remux a media file with libavformat and libavcodec.\n"
              "The output format is guessed according to the file extension.\n"
              "\n", argv[0]);
       return 1;
    }
    in_filename  = argv[1];
    out_filename = argv[2];
    av_register_all();
    if ((ret = avformat_open_input(&ifmt_ctx, in_filename, 0, 0)) < 0) {
       fprintf(stderr, "Could not open input file '%s'", in_filename);
       goto end;
    }

    if ((ret = avformat_find_stream_info(ifmt_ctx, 0)) < 0) {
       fprintf(stderr, "Failed to retrieve input stream information");
       goto end;
    }

    av_dump_format(ifmt_ctx, 0, in_filename, 0);
    avformat_alloc_output_context2(&ofmt_ctx, NULL, NULL, out_filename);

    if (!ofmt_ctx) {
       fprintf(stderr, "Could not create output context\n");
       ret = AVERROR_UNKNOWN;
       goto end;
    }

    ofmt = ofmt_ctx->oformat;

    for (i = 0; i < ifmt_ctx->nb_streams; i++) {
       AVStream *in_stream = ifmt_ctx->streams[i];
       AVStream *out_stream = avformat_new_stream(ofmt_ctx, in_stream->codec->codec);

       if(ifmt_ctx->streams[i]->codec->codec_type==AVMEDIA_TYPE_VIDEO &&
          videoStream < 0) {
              videoStream=i;
       }

       if(ifmt_ctx->streams[i]->codec->codec_type==AVMEDIA_TYPE_AUDIO &&
          audioStream < 0) {
               audioStream=i;
       }

       if (!out_stream) {
           fprintf(stderr, "Failed allocating output stream\n");
           ret = AVERROR_UNKNOWN;
           goto end;
       }

       ret = avcodec_copy_context(out_stream->codec, in_stream->codec);

       if (ret < 0) {
           fprintf(stderr, "Failed to copy context from input to output stream codec context\n");
           goto end;
       }

       out_stream->codec->codec_tag = 0;

       if (ofmt_ctx->oformat->flags & AVFMT_GLOBALHEADER)
          out_stream->codec->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
    }

    pCodec=avcodec_find_decoder(ifmt_ctx->streams[videoStream]->codec->codec_id);
    pCodecCtx = avcodec_alloc_context3(pCodec);

    if(avcodec_copy_context(pCodecCtx, ifmt_ctx->streams[videoStream]->codec) != 0) {
     fprintf(stderr, "Couldn't copy codec context");
     return -1; // Error copying codec context
    }

    // Open codec
    if(avcodec_open2(pCodecCtx, pCodec, NULL)<0)
      return -1; // Could not open codec

    // Allocate video frame
    pFrame=av_frame_alloc();

    // Allocate an AVFrame structure
    pFrameRGB=av_frame_alloc();

    // Determine required buffer size and allocate buffer
    numBytes=avpicture_get_size(AV_PIX_FMT_RGB24, ifmt_ctx->streams[videoStream]->codec->width,
                    ifmt_ctx->streams[videoStream]->codec->height);

    buffer=(uint8_t *)av_malloc(numBytes*sizeof(uint8_t));

    // Assign appropriate parts of buffer to image planes in pFrameRGB
    // Note that pFrameRGB is an AVFrame, but AVFrame is a superset
    // of AVPicture
    avpicture_fill((AVPicture *)pFrameRGB, buffer, AV_PIX_FMT_BGR24,
           ifmt_ctx->streams[videoStream]->codec->width, ifmt_ctx->streams[videoStream]->codec->height);

    av_dump_format(ofmt_ctx, 0, out_filename, 1);

    if (!(ofmt->flags & AVFMT_NOFILE)) {
       ret = avio_open(&ofmt_ctx->pb, out_filename, AVIO_FLAG_WRITE);
       if (ret < 0) {
           fprintf(stderr, "Could not open output file '%s'", out_filename);
           goto end;
       }
    }

    ret = avformat_write_header(ofmt_ctx, NULL);
    if (ret < 0) {
       fprintf(stderr, "Error occurred when opening output file\n");
       goto end;
    }

    // Assign appropriate parts of buffer to image planes in pFrameRGB
    // Note that pFrameRGB is an AVFrame, but AVFrame is a superset
    // of AVPicture

    avpicture_fill((AVPicture *)pFrameRGB, buffer, AV_PIX_FMT_BGR24,
                      ifmt_ctx->streams[videoStream]->codec->width,
                      ifmt_ctx->streams[videoStream]->codec->height);

    // initialize SWS context for software scaling
    sws_ctx = sws_getContext(
                ifmt_ctx->streams[videoStream]->codec->width,
                ifmt_ctx->streams[videoStream]->codec->height,
                ifmt_ctx->streams[videoStream]->codec->pix_fmt,
                ifmt_ctx->streams[videoStream]->codec->width,
                ifmt_ctx->streams[videoStream]->codec->height,
                AV_PIX_FMT_BGR24,
                SWS_BICUBIC,
                NULL,
                NULL,
                NULL
                );
    // Loop through packets
    while (1) {

       AVStream *in_stream, *out_stream;
       ret = av_read_frame(ifmt_ctx, &pkt);
       if(pkt.stream_index==videoStream)

        // Decode video frame
         avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished, &pkt);

         if(frameFinished) {
                   sws_scale(sws_ctx, (uint8_t const * const *)pFrame->data,
                   pFrame->linesize, 0, pCodecCtx->height,
                   pFrameRGB->data, pFrameRGB->linesize);
                   cv::Mat img= mal_frame->process(
                             pFrameRGB,pFrame->width,pFrame->height);
    /* My problem is Here ------------*/


       avpicture_fill((AVPicture*)pFrameRGB,
                        img.data,
                        PIX_FMT_BGR24,
                        outStream->codec->width,
                        outStream->codec->height);

       pFrameRGB->width =  ifmt_ctx->streams[videoStream]->codec->width;
       pFrameRGB->height = ifmt_ctx->streams[videoStream]->codec->height;

               avcodec_encode_video2(ifmt_ctx->streams[videoStream]->codec ,
                                                        &pkt , pFrameRGB , &gotPacket);
    /*
    I get this error
    [swscaler @ 0x14b58a0] bad src image pointers
    [swscaler @ 0x14b58a0] bad src image pointers
    */

    /* My Problem Ends here ---------- */

       }

       if (ret < 0)
           break;

       in_stream  = ifmt_ctx->streams[pkt.stream_index];
       out_stream = ofmt_ctx->streams[pkt.stream_index];

       //log_packet(ifmt_ctx, &pkt, "in");

       /* copy packet */
       pkt.pts = av_rescale_q_rnd(pkt.pts, in_stream->time_base, out_stream->time_base,
                                  AV_ROUND_NEAR_INF);
       pkt.dts = av_rescale_q_rnd(pkt.dts, in_stream->time_base, out_stream->time_base,
                                  AV_ROUND_NEAR_INF);
       pkt.duration = av_rescale_q(pkt.duration, in_stream->time_base, out_stream->time_base);
       pkt.pos = -1;
       log_packet(ofmt_ctx, &pkt, "out");

       ret = av_interleaved_write_frame(ofmt_ctx, &pkt);
       if (ret < 0) {
           fprintf(stderr, "Error muxing packet\n");
           break;
       }
       av_free_packet(&pkt);
    }

    av_write_trailer(ofmt_ctx);

    end:
    avformat_close_input(&ifmt_ctx);

    /* close output */
    if (ofmt_ctx && !(ofmt->flags & AVFMT_NOFILE))
       avio_closep(&ofmt_ctx->pb);
    avformat_free_context(ofmt_ctx);

    if (ret < 0 && ret != AVERROR_EOF) {
       return 1;
    }
    return 0;
    }

    When I run this code, I get an unknown fatal error in this part:

      /* My problem is Here ------------*/


       avpicture_fill((AVPicture*)pFrameRGB,
                        img.data,
                        PIX_FMT_BGR24,
                        outStream->codec->width,
                        outStream->codec->height);

       pFrameRGB->width =  ifmt_ctx->streams[videoStream]->codec->width;
       pFrameRGB->height = ifmt_ctx->streams[videoStream]->codec->height;

               avcodec_encode_video2(ifmt_ctx->streams[videoStream]->codec ,
                                                        &pkt , pFrameRGB , &gotPacket);
    /*
    I get this error
    [swscaler @ 0x14b58a0] bad src image pointers
    [swscaler @ 0x14b58a0] bad src image pointers
    */

    /* My Problem Ends here ---------- */

    This is where I want to convert the CV Mat back to an AVFrame and encode it into an AVPacket. I appreciate your help.
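
    For reference, a minimal sketch of one way to get from a BGR cv::Mat back into an AVFrame that an encoder will accept: treat the Mat's pixel buffer as a single packed BGR24 plane and sws_scale it into a freshly allocated YUV420P frame. The pixel format, alignment and error handling below are illustrative assumptions, not taken from the question's code.

    // Assumes the same headers as the question: <opencv2/opencv.hpp> and, inside
    // extern "C", <libavutil/frame.h> and <libswscale/swscale.h>.
    AVFrame *mat_to_avframe(const cv::Mat &img, int width, int height)
    {
        // Destination frame in a format a typical encoder accepts (assumed YUV420P).
        AVFrame *frame = av_frame_alloc();
        if (!frame)
            return NULL;
        frame->format = AV_PIX_FMT_YUV420P;
        frame->width  = width;
        frame->height = height;
        if (av_frame_get_buffer(frame, 32) < 0) {      // allocate the Y/U/V planes
            av_frame_free(&frame);
            return NULL;
        }

        // Source: one packed BGR24 plane backed by the Mat's own pixel buffer.
        const uint8_t *src_data[4]   = { img.data, NULL, NULL, NULL };
        int            src_stride[4] = { static_cast<int>(img.step), 0, 0, 0 };

        SwsContext *sws = sws_getContext(width, height, AV_PIX_FMT_BGR24,
                                         width, height, AV_PIX_FMT_YUV420P,
                                         SWS_BICUBIC, NULL, NULL, NULL);
        if (!sws) {
            av_frame_free(&frame);
            return NULL;
        }
        sws_scale(sws, src_data, src_stride, 0, height,
                  frame->data, frame->linesize);
        sws_freeContext(sws);
        return frame;      // hand this to the encoder, then av_frame_free() it
    }

    A frame built this way owns its own buffers, so it remains valid after the Mat is released and can be passed straight to the encoding call.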

  • Survey of CD Image Formats

    30 April 2013, by Multimedia Mike — General

    In the course of exploring and analyzing the impressive library of CD images curated at the Internet Archive’s Shareware CD collection, one encounters a wealth of methods for copying a complete CD image onto other media for transport. In researching the formats, I have found that many of them are native to various binary, proprietary CD programs that operate under Windows. Since I have an interest in interpreting these image formats and I would also like to do so outside of Windows, I thought to conduct a survey to determine if enough information exists to write processing tools of my own.

    Remember from my Grand Unified Theory of Compact Disc that CDs, from a high enough level of software abstraction, are just strings of 2352-byte sectors broken up into tracks. The difference among various types of CDs comes down to the specific meaning of these 2352 bytes.

    Most imaging formats rip these strings of sectors into a giant file and then record some metadata about the tracks and sectors.
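
    To make that concrete, here is a small C sketch, purely illustrative and not tied to any particular format below, that walks such a raw rip and keeps only the 2048-byte user-data payload of each 2352-byte sector. It assumes plain Mode 1 data sectors (12-byte sync, 4-byte header, 2048 bytes of user data, 288 bytes of EDC/ECC) and ignores subchannel data entirely.

    #include <stdio.h>

    #define RAW_SECTOR  2352
    #define USER_OFFSET 16      /* 12-byte sync + 4-byte header */
    #define USER_SIZE   2048

    /* Copy the user data of every Mode 1 sector out of a raw BIN-style rip. */
    int extract_mode1(const char *bin_path, const char *out_path)
    {
        unsigned char sector[RAW_SECTOR];
        FILE *in  = fopen(bin_path, "rb");
        FILE *out = fopen(out_path, "wb");
        if (!in || !out) {
            if (in)  fclose(in);
            if (out) fclose(out);
            return -1;
        }
        while (fread(sector, 1, RAW_SECTOR, in) == RAW_SECTOR)
            fwrite(sector + USER_OFFSET, 1, USER_SIZE, out);
        fclose(in);
        fclose(out);
        return 0;
    }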

    ISO
    This is perhaps the most common method for storing CD images. It's generally only applicable to data CD-ROMs. Image files generally end with a .iso extension. This refers to ISO-9660, which is the standard CD filesystem.

    Sometimes, disc images ripped from other types of discs (like Xbox/360 or GameCube discs) bear the extension .iso, which is a bit of a misnomer since they aren’t formatted using the ISO-9660 filesystem. But the extension sort of stuck.
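
    As a quick illustration of that filesystem link, here is a small sketch that checks whether an image file looks like ISO-9660: the standard places its volume descriptors starting at sector 16 (2048-byte sectors), each carrying a one-byte type followed by the identifier "CD001".

    #include <stdio.h>
    #include <string.h>

    /* Return 1 if the image contains an ISO-9660 volume descriptor. */
    int is_iso9660(const char *path)
    {
        unsigned char id[5];
        FILE *f = fopen(path, "rb");
        if (!f)
            return 0;
        /* Sector 16: skip the 1-byte descriptor type, then expect "CD001". */
        int ok = fseek(f, 16L * 2048 + 1, SEEK_SET) == 0 &&
                 fread(id, 1, 5, f) == 5 &&
                 memcmp(id, "CD001", 5) == 0;
        fclose(f);
        return ok;
    }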

    BIN / CUE
    I see the BIN & CUE file format combination quite frequently. Reportedly, a program named CDRWIN deployed this format first. This format can handle a mixed mode CD (e.g., starts with a data track and is followed by a series of audio tracks), whereas ISO can only handle the data track. The BIN file contains the raw data while the CUE file is a text file that defines how the BIN file is formatted (how many bytes in a sector, how many sectors to each individual track).
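
    For illustration, a minimal CUE sheet for a hypothetical mixed-mode rip (the file name and timestamps here are made up) could read as follows; each INDEX gives the MSF position (minutes:seconds:frames) at which a track starts inside the BIN file:

    FILE "image.bin" BINARY
      TRACK 01 MODE1/2352
        INDEX 01 00:00:00
      TRACK 02 AUDIO
        INDEX 01 23:45:10

    Pregaps, ISRC codes and CD-Text are layered on with additional keywords, but the FILE/TRACK/INDEX triad is the core of the format.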

    CDI
    This originates from a program called DiscJuggler. It is extremely prevalent in the Sega Dreamcast hobbyist community for some reason. I studied the raw hex dumps of some sample CDI files but there was no obvious data (mostly 0s). There is an open source utility called cdi2iso which is able to extract an ISO image from a CDI file. The program's source clued me in that the metadata is actually sitting at the end of the image file. This makes sense when you consider how a ripping program needs to operate: copy tracks, sector by sector, and then do something with the metadata after the fact. Options include: 1) write the metadata at the end of the file (as seen here); 2) write the metadata into a separate file (seen in other formats on this list); 3) write the metadata at the beginning of the file, which would require a full rewrite of the entire (usually large) image file (I haven't seen this yet).

    Anyway, I believe I have enough information to write a program that can interpret a CDI file. The reason this format is favored for Dreamcast disc images is likely due to the extreme weirdness of Dreamcast discs (it’s complicated, but eventually fits into my Grand Unified Theory of CDs, if you look at it from a high level).

    MDF / MDS
    MDF and MDS pairs come from a program called Alcohol 120%. The MDF file has the data while the MDS file contains the metadata. The metadata is in an opaque binary format, though. Thankfully, the Wikipedia page links to a description of the format. That’s another image format down.

    CCD / SUB / IMG
    The CloneCD Control File is one I just ran across today thanks to a new image posted at the IA Shareware Archive (see Super Duke Volume 2). I haven't found any definitive documentation on this, but it also doesn't seem too complicated. The .ccd file is a text file that is pretty self-explanatory. The sample linked above, however, only has a .ccd file and a .sub file. I'm led to believe that the .sub file contains subchannel information while a .img file is supposed to contain the binary data. So this rip might be incomplete (nope, the .img file is on the page, in the sidebar; thanks to Phil in the comments for pointing this out). The .sub file is a bit short compared to the Archive's description of the disc's contents (only about 4.6 MB of data), and when I briefly scrolled through, it didn't look like it contained any real computer data. So it probably is just the disc's subchannel data (something I glossed over in my Grand Unified Theory).

    CSO
    I have dealt with the CISO (compressed ISO) format before. It’s basically the same as a .iso file described above except that each individual 2048-byte data sector is compressed using zlib. The format boasts up to 9 compression levels, which shouldn’t be a big surprise since that correlates to zlib’s own compression tiers.
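
    As a sketch of the per-sector decompression, assuming the image's block index has already yielded a block's file offset and compressed length, and assuming compressed blocks are stored as raw deflate streams (a convention of common CISO tools rather than something stated above):

    #include <stdio.h>
    #include <zlib.h>

    /* Inflate one compressed 2048-byte sector, given its file offset and
       compressed length (both taken from the image's block index). */
    int read_compressed_sector(FILE *img, long offset, unsigned comp_len,
                               unsigned char out[2048])
    {
        unsigned char comp[2048 + 64];          /* room for a block that barely shrank */
        if (comp_len > sizeof(comp) ||
            fseek(img, offset, SEEK_SET) != 0 ||
            fread(comp, 1, comp_len, img) != comp_len)
            return -1;

        z_stream zs = {0};
        zs.next_in   = comp;
        zs.avail_in  = comp_len;
        zs.next_out  = out;
        zs.avail_out = 2048;
        if (inflateInit2(&zs, -15) != Z_OK)     /* -15: raw deflate, no zlib header */
            return -1;
        int ret = inflate(&zs, Z_FINISH);
        inflateEnd(&zs);
        return ret == Z_STREAM_END ? 0 : -1;
    }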

    Others
    Wikipedia has a category for optical disc image formats. Of course, there are numerous others. However, I haven’t encountered them in the wild for the purpose of broad image distribution.

  • FFMpeg estimated execution time

    14 June 2017, by Juvi

    I'm using the FFmpegAndroid library in my project to overlay a video.

    The ffmpeg process runs inside a service, and I want to show the user a notification whose progress bar reflects how far along the process is.

    I've gone through ffmpeg's output, but nothing in it specifies an estimated duration.
    Maybe it is possible to calculate it from other parameters shown in the output, such as fps, bitrate or speed, but I have no clue.

    Any ideas?
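
    One way to approach it, assuming the library relays ffmpeg's normal console output: grab the time= field from each progress line and divide it by the input's total duration, which can be obtained beforehand from the Duration: line or from ffprobe. A minimal parsing sketch:

    #include <cstdio>
    #include <cstring>

    // ffmpeg progress lines look like:
    //   frame=  250 fps= 25 ... time=00:00:10.04 bitrate= 417.6kbits/s speed=1.01x
    // Returns progress in percent, or -1 if the line carries no time= field.
    double progress_percent(const char *line, double total_duration_sec)
    {
        const char *p = std::strstr(line, "time=");
        int h = 0, m = 0;
        double s = 0.0;
        if (!p || total_duration_sec <= 0.0 ||
            std::sscanf(p, "time=%d:%d:%lf", &h, &m, &s) != 3)
            return -1.0;
        return 100.0 * (h * 3600 + m * 60 + s) / total_duration_sec;
    }

    The resulting percentage maps directly onto the notification's progress bar.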