Other articles (59)

  • Supporting all media types

    13 April 2011

    Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats: images: png, gif, jpg, bmp and more; audio: MP3, Ogg, Wav and more; video: AVI, MP4, OGV, mpg, mov, wmv and more; text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (html, CSS), LaTeX, Google Earth and (...)

  • HTML5 audio and video support

    13 April 2011

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The player used was created specifically for MediaSPIP and can easily be adapted to fit a particular theme.
    For older browsers, the Flowplayer Flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

  • From upload to the final video [standalone version]

    31 January 2010

    The journey of an audio or video document through SPIPMotion is divided into three distinct stages.
    Upload and retrieval of information about the source video
    First of all, a SPIP article has to be created and the "source" video document attached to it.
    When this document is attached to the article, two actions are carried out in addition to the normal behaviour: retrieval of the technical information about the file's audio and video streams; generation of a thumbnail: extraction of a (...)

On other sites (7271)

  • unable to load PHP_ffmpeg.dll

    2 March 2012, by Alexis Zenigata

    Been trying to make this work but no luck.

    I always get

    PHP Warning: PHP Startup: Unable to load dynamic library 'C:\xampp\php\ext\php_ffmpeg.dll' - The specified module could not be found.

    1. Using XAMPP 1.7.4 on a Windows 7 64-bit computer.
    2. Tried several php_ffmpeg.dll files and restarted, but I still encounter the error.
    3. Made sure that the DLL is in C:\xampp\php\ext and copied the other DLLs to both system32 and SysWOW64.
    4. Was previously using the newest version of WAMP with no luck, so I decided to go to XAMPP with a lower version.

    If this doesn't work, are there any other scripts I can use for converting uploaded audio files besides FFmpeg? The uploaded audio files have a .wav extension.

    Thanks

  • streaming H.264 over RTP with libavformat

    16 April 2012, by Jacob Peddicord

    I've been trying over the past week to implement H.264 streaming over RTP, using x264 as an encoder and libavformat to pack and send the stream. Problem is, as far as I can tell it's not working correctly.

    Right now I'm just encoding random data (x264_picture_alloc) and extracting NAL frames from libx264. This is fairly simple:

    x264_picture_t pic_out;
    x264_nal_t* nals;
    int num_nals;
    int frame_size = x264_encoder_encode(this->encoder, &nals, &num_nals, this->pic_in, &pic_out);

    if (frame_size <= 0)
    {
       return frame_size;
    }

    // push NALs into the queue
    for (int i = 0; i < num_nals; i++)
    {
       // create a NAL storage unit
       NAL nal;
       nal.size = nals[i].i_payload;
       nal.payload = new uint8_t[nal.size];
       memcpy(nal.payload, nals[i].p_payload, nal.size);

       // push the storage into the NAL queue
       {
           // lock and push the NAL to the queue
           boost::mutex::scoped_lock lock(this->nal_lock);
           this->nal_queue.push(nal);
       }
    }

    nal_queue is used for safely passing frames over to a Streamer class which will then send the frames out. Right now it's not threaded, as I'm just testing to try to get this to work. Before encoding individual frames, I've made sure to initialize the encoder.
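
    For reference, the NAL type and the nal_pop() used later aren't shown in the question; a minimal sketch of what the snippets imply (a plain struct plus a lock-guarded pop over the same queue) might look like this:

    struct NAL
    {
       uint8_t* payload;   // Annex-B payload copied from x264
       int      size;      // payload size in bytes
    };

    // Hypothetical counterpart to the push above, assuming nal_queue is a
    // std::queue<NAL> guarded by nal_lock.
    NAL Encoder::nal_pop()
    {
       boost::mutex::scoped_lock lock(this->nal_lock);
       NAL nal = this->nal_queue.front();
       this->nal_queue.pop();
       return nal;
    }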

    But I don't believe x264 is the issue, as I can see frame data in the NALs it returns.
    Streaming the data is accomplished with libavformat, which is first initialized in a Streamer class:

    Streamer::Streamer(Encoder* encoder, string rtp_address, int rtp_port, int width, int height, int fps, int bitrate)
    {
       this->encoder = encoder;

       // initialize the AV context
       this->ctx = avformat_alloc_context();
       if (!this->ctx)
       {
           throw runtime_error("Couldn't initialize AVFormat output context");
       }

       // get the output format
       this->fmt = av_guess_format("rtp", NULL, NULL);
       if (!this->fmt)
       {
           throw runtime_error("Unsuitable output format");
       }
       this->ctx->oformat = this->fmt;

       // try to open the RTP stream
       snprintf(this->ctx->filename, sizeof(this->ctx->filename), "rtp://%s:%d", rtp_address.c_str(), rtp_port);
       if (url_fopen(&(this->ctx->pb), this->ctx->filename, URL_WRONLY) < 0)
       {
           throw runtime_error("Couldn't open RTP output stream");
       }

       // add an H.264 stream
       this->stream = av_new_stream(this->ctx, 1);
       if (!this->stream)
       {
           throw runtime_error("Couldn't allocate H.264 stream");
       }

       // initialize codec
       AVCodecContext* c = this->stream->codec;
       c->codec_id = CODEC_ID_H264;
       c->codec_type = AVMEDIA_TYPE_VIDEO;
       c->bit_rate = bitrate;
       c->width = width;
       c->height = height;
       c->time_base.den = fps;
       c->time_base.num = 1;

       // write the header
       av_write_header(this->ctx);
    }

    This is where things seem to go wrong. av_write_header above seems to do absolutely nothing; I've used Wireshark to verify this. For reference, I use Streamer streamer(&enc, "10.89.6.3", 49990, 800, 600, 30, 40000); to initialize the Streamer instance, with enc being a reference to an Encoder object used to handle x264 previously.
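
    One thing the constructor above never fills in is the stream's extradata. For H.264 over RTP, the SPS/PPS normally travel out of band (in the SDP's sprop-parameter-sets), and libavformat reads them from AVCodecContext::extradata. The following is only a hedged sketch of that step, placed just before the av_write_header call, not a confirmed fix for the silence seen here; x264 stands for the same x264_t* handle that x264_encoder_encode() is called on elsewhere:

    // fetch the encoder's SPS/PPS and copy them into the codec context
    // so that the muxer and SDP generation can see them
    x264_nal_t* hdr_nals;
    int num_hdr_nals;
    if (x264_encoder_headers(x264, &hdr_nals, &num_hdr_nals) < 0)
    {
       throw runtime_error("Couldn't fetch SPS/PPS from x264");
    }

    // signal that the parameter sets are carried out of band
    c->flags |= CODEC_FLAG_GLOBAL_HEADER;

    int total = 0;
    for (int i = 0; i < num_hdr_nals; i++)
    {
       total += hdr_nals[i].i_payload;
    }

    c->extradata = (uint8_t*) av_mallocz(total + FF_INPUT_BUFFER_PADDING_SIZE);
    c->extradata_size = 0;
    for (int i = 0; i < num_hdr_nals; i++)
    {
       // keep only the SPS and PPS NALs (types from x264.h)
       if (hdr_nals[i].i_type == NAL_SPS || hdr_nals[i].i_type == NAL_PPS)
       {
           memcpy(c->extradata + c->extradata_size,
                  hdr_nals[i].p_payload, hdr_nals[i].i_payload);
           c->extradata_size += hdr_nals[i].i_payload;
       }
    }

    It is also worth checking the return value of av_write_header, which the constructor currently ignores; as far as I know, the RTP muxer writes no on-the-wire header at all, so an empty capture at this point is not by itself proof of failure.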

    Now when I want to stream out a NAL, I use this:

    // grab a NAL
    NAL nal = this->encoder->nal_pop();
    cout << "NAL popped with size " << nal.size << endl;

    // initialize a packet
    AVPacket p;
    av_init_packet(&p);
    p.data = nal.payload;
    p.size = nal.size;
    p.stream_index = this->stream->index;

    // send it out
    av_write_frame(this->ctx, &p);

    At this point, I can see RTP data appearing over the network, and it looks like the frames I've been sending, even including a little copyright blob from x264. But, no player I've used has been able to make any sense of the data. VLC quits wanting an SDP description, which apparently isn't required.
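
    An SDP is in fact how a receiver learns the payload type, clock rate and sprop-parameter-sets of a raw H.264-over-RTP session, and libavformat can generate one from the context set up above. This is only a hedged sketch (the output file name is arbitrary, and it assumes the stream's extradata was populated as discussed earlier):

    // dump an SDP describing the RTP session; a player can then open the
    // file directly, e.g. "vlc stream.sdp" (std::ofstream needs <fstream>)
    char sdp[2048];
    AVFormatContext* contexts[1] = { this->ctx };
    if (av_sdp_create(contexts, 1, sdp, sizeof(sdp)) == 0)
    {
       std::ofstream sdp_file("stream.sdp");
       sdp_file << sdp << std::endl;
    }

    The GStreamer error below is asking for the same information: rtph264depay needs either an SDP-driven setup or explicit application/x-rtp caps (media, clock-rate, encoding-name=H264, payload) set on udpsrc before it can negotiate the format.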

    I then tried to play it through gst-launch:

    gst-launch udpsrc port=49990 ! rtph264depay ! decodebin ! xvimagesink

    This will sit waiting for UDP data, but when it is received, I get:

    ERROR: element /GstPipeline:pipeline0/GstRtpH264Depay:rtph264depay0: No RTP
    format was negotiated. Additional debug info:
    gstbasertpdepayload.c(372): gst_base_rtp_depayload_chain ():
    /GstPipeline:pipeline0/GstRtpH264Depay:rtph264depay0: Input buffers
    need to have RTP caps set on them. This is usually achieved by setting
    the 'caps' property of the upstream source element (often udpsrc or
    appsrc), or by putting a capsfilter element before the depayloader and
    setting the 'caps' property on that. Also see
    http://cgit.freedesktop.org/gstreamer/gst-plugins-good/tree/gst/rtp/README

    As I'm not using GStreamer to do the streaming itself, I'm not quite sure what it means by RTP caps. But it makes me wonder if I'm not sending enough information over RTP to describe the stream. I'm pretty new to video and I feel like there's some key thing I'm missing here. Any hints?

  • Finding Optimal Code Coverage

    7 March 2012, by Multimedia Mike — Programming

    A few months ago, I published a procedure for analyzing code coverage of the test suites exercised in FFmpeg and Libav. I used it to add some more tests and I have it on good authority that it has helped other developers fill in some gaps as well (beginning with students helping out with the projects as part of the Google Code-In program). Now I’m wondering about ways to do better.

    Current Process
    When adding a test that depends on a sample (like a demuxer or decoder test), it’s ideal to add a sample that’s A) small, and B) exercises as much of the codebase as possible. When I was studying code coverage statistics for the WC4-Xan video decoder, I noticed that the sample didn’t exercise one of the 2 possible frame types. So I scouted samples until I found one that covered both types, trimmed the sample down, and updated the coverage suite.

    I started wondering about a method for finding the optimal test sample for a given piece of code, one that exercises every code path in a module. Okay, so that's foolhardy in the vast majority of cases (although I was able to add one test spec that pushed a module's code coverage from 0% all the way to 100% — but the module in question only had 2 exercisable lines). Still, given a large enough corpus of samples, how can I find the smallest set of samples that exercises the complete codebase?

    This almost sounds like an NP-complete problem. But why should that stop me from trying to find a solution?
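
    For reference, this is essentially the weighted set cover problem, whose decision version is NP-complete: if sample i occupies w_i bytes and exercises the set of lines C_i, and L is the set of all exercisable lines in the module, the goal is to choose a subset S of samples achieving

        \min_{S} \sum_{i \in S} w_i \quad \text{subject to} \quad \bigcup_{i \in S} C_i = L.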

    Science Project
    Here's the pitch:

    • Instrument FFmpeg with code coverage support
    • Download lots of media to exercise a particular module
    • Run FFmpeg against each sample and log code coverage statistics
    • Distill the resulting data in some meaningful way in order to obtain more optimal code coverage

    That first step sounds harsh: downloading lots and lots of media. Fortunately, there is at least one multimedia format in the projects that tends to be extremely small: ANSI. These are files that are designed to display elaborate scrolling graphics using text mode. Further, the FATE sample currently deployed for this test (TRE_IOM5.ANS) only exercises a little less than 50% of the code in libavcodec/ansi.c. I believe this makes the ANSI video decoder a good candidate for this experiment.

    Procedure
    First, find a site that hosts a lot of ANSI files. Hi, sixteencolors.net. This site has lots of artpacks (on the order of 4000), which are ZIP archives that contain multiple ANSI files (and sometimes some other files). I scraped a list of all the artpack names.

    In an effort to be responsible, I randomized the list of artpacks and downloaded periodically and with limited bandwidth ('wget --limit-rate=20k').

    Run ‘gcov’ on ansi.c in order to gather the full set of line numbers to be covered.

    For each artpack, unpack the contents, run the instrumented FFmpeg on each file inside, run ‘gcov’ on ansi.c, and log statistics including the file’s size, the file’s location (artpack.zip:filename), and a comma-separated list of line numbers touched.
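
    As a rough illustration of the logging step, here is a small sketch that parses gcov's annotated output for ansi.c and prints one stats line per sample; the command-line layout and exact output formatting are assumptions, since the original tooling isn't shown:

    #include <cstdlib>
    #include <fstream>
    #include <iostream>
    #include <sstream>
    #include <string>
    #include <vector>

    // Collect the line numbers gcov marks as executed. Each record in a
    // .gcov file looks like "  count:  lineno: source", where count is
    // '-' for non-executable lines and '#####' for lines never hit.
    std::vector<int> executed_lines(const std::string& gcov_path)
    {
       std::vector<int> lines;
       std::ifstream in(gcov_path.c_str());
       std::string line;
       while (std::getline(in, line))
       {
           std::istringstream fields(line);
           std::string count, lineno;
           if (!std::getline(fields, count, ':') || !std::getline(fields, lineno, ':'))
               continue;
           count.erase(0, count.find_first_not_of(" \t"));
           lineno.erase(0, lineno.find_first_not_of(" \t"));
           if (count.empty() || count == "-" || count[0] == '#' || count[0] == '=')
               continue;                          // not executable, or never executed
           lines.push_back(atoi(lineno.c_str())); // executed at least once
       }
       return lines;
    }

    int main(int argc, char** argv)
    {
       // argv[1]: sample size in bytes, argv[2]: "artpack.zip:filename",
       // argv[3]: path to the ansi.c.gcov produced after running FFmpeg on the sample
       if (argc < 4)
           return 1;
       std::vector<int> lines = executed_lines(argv[3]);
       std::cout << argv[1] << " " << argv[2] << " ";
       for (size_t i = 0; i < lines.size(); i++)
           std::cout << (i ? "," : "") << lines[i];
       std::cout << std::endl;
       return 0;
    }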

    Definition of ‘Optimal’
    The foregoing procedure worked and yielded useful, raw data. Now I have to figure out how to analyze it.

    I think it's most desirable to have the smallest files (in terms of bytes) that exercise the most lines of code. To that end, I sorted the results by file size, ascending. A Python script initializes a set of all exercisable line numbers in ansi.c, then iterates through each file's stats line, adding the file to the list of candidate samples if its set of exercised lines can remove any line numbers from the overall set of lines. Ideally, that set of lines should devolve to an empty set.
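
    Rendered as code, that greedy pass looks roughly like the following; the actual script is Python and isn't reproduced here, so the record type and names below are assumptions:

    #include <algorithm>
    #include <set>
    #include <string>
    #include <vector>

    struct SampleStats               // one stats line from the logging step
    {
       long size;                    // file size in bytes
       std::string name;             // "artpack.zip:filename"
       std::set<int> lines;          // line numbers of ansi.c the sample exercised
    };

    static bool by_size(const SampleStats& a, const SampleStats& b)
    {
       return a.size < b.size;
    }

    // Walk the samples smallest-first and keep any sample that still knocks
    // line numbers out of the uncovered set; uncovered starts out holding
    // every exercisable line of ansi.c.
    std::vector<SampleStats> pick_candidates(std::vector<SampleStats> samples,
                                             std::set<int> uncovered)
    {
       std::sort(samples.begin(), samples.end(), by_size);

       std::vector<SampleStats> chosen;
       for (size_t i = 0; i < samples.size() && !uncovered.empty(); i++)
       {
           bool useful = false;
           for (std::set<int>::const_iterator it = samples[i].lines.begin();
                it != samples[i].lines.end(); ++it)
           {
               if (uncovered.erase(*it))
                   useful = true;        // this sample covered a new line
           }
           if (useful)
               chosen.push_back(samples[i]);
       }
       return chosen;                    // ideally uncovered is now empty
    }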

    I think a second possible approach is to find the single sample that exercises the most code and then proceed with the previously described method.

    Initial Results
    So far, I have analyzed 13324 samples from 357 different artpacks provided by sixteencolors.net.

    Using the first method, I can find a set of samples that covers nearly 80% of ansi.c:

    0 bytes: bad-0494.zip:5
    1 bytes: grip1293.zip:-ANSI---.---
    1 bytes: pur-0794.zip:.
    2 bytes: awe9706.zip:-ANSI───.───
    61 bytes: echo0197.zip:-(ART)-
    62 bytes: hx03.zip:HX005.DAT
    76 bytes: imp-0494.zip:IMPVIEW.CFG
    82 bytes: ice0010b.zip:_cont'd_.___
    101 bytes: bdp-0696.zip:BDP2.WAD
    112 bytes: plain12.zip:--------.---
    181 bytes: ins1295v.zip:-°VGA°-.  н
    219 bytes: purg-22.zip:NEM-SHIT.ASC
    289 bytes: srg1196.zip:HOWTOREQ.JNK
    315 bytes: karma-04.zip:FASHION.COM
    318 bytes: buzina9.zip:ox-rmzzy.ans
    411 bytes: solo1195.zip:FU-BLAH1.RIP
    621 bytes: ciapak14.zip:NA-APOC1.ASC
    951 bytes: lght9404.zip:AM-TDHO1.LIT
    1214 bytes: atb-1297.zip:TX-ROKL.ASC
    2332 bytes: imp-0494.zip:STATUS.ANS
    3218 bytes: acepak03.zip:TR-STAT5.ANS
    6068 bytes: lgc-0193.zip:LGC-0193.MEM
    16778 bytes: purg-20.zip:EZ-HIR~1.JPG
    20582 bytes: utd0495.zip:LT-CROW3.ANS
    26237 bytes: quad0597.zip:MR-QPWP.GIF
    29208 bytes: mx-pack17.zip:mx-mobile-source-logo.jpg
    ----
    109440 bytes total

    A few notes about that list: some of those filenames are composed primarily of control characters. l33t, and all that. The first file is 0 bytes. I wondered if I should discard 0-length files but decided to keep those in, especially if they exercise lines that wouldn't normally be activated. Also, there are a few JPEG and GIF files in the set. I should point out that I forced the tty demuxer using -f tty, and there isn't much in the way of signatures for this format. So, again, whatever exercises more lines is better.

    Using this same corpus, I tried approach 2: which single sample exercises the most lines of the decoder? Answer: blde9502.zip:REQUEST.EXE. Huh. I checked it out and 'file' IDs it as an MS-DOS executable. So, that approach wasn't fruitful, at least not for this corpus, since I'm forcing everything through this narrow code path.

    Think About The Future
    Where can I take this next? The cloud! I have people inside the search engine industry who have furnished me with extensive lists of specific types of multimedia files from around the internet. I also see that Amazon Web Services Elastic Compute Cloud (AWS EC2) instances don't charge for incoming bandwidth.

    I think you can see where I’m going with this.
