Other articles (101)

  • Sites built with MediaSPIP

    2 May 2011

    This page presents some of the sites running MediaSPIP.
    You can of course add your own via the form at the bottom of the page.

  • MediaSPIP 0.1 Beta version

    25 April 2011

    MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
    The zip file provided here only contains the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to proceed with other manual (...)

  • What exactly does this script do?

    18 January 2011

    This script is written in bash, so it can easily be used on any server.
    It is only compatible with a specific list of distributions (see Liste des distributions compatibles).
    Installing MediaSPIP dependencies
    Its main role is to install all of the software dependencies required on the server side, namely:
    the basic tools needed to install the remaining dependencies; development tools: build-essential (via APT from the official repositories); (...)

On other sites (10119)

  • Processing a single frame of audio and image in FFmpeg

    21 July 2015, by James F

    Currently we have an implementation of FFmpeg which is triggered from an ActionScript 3 (AS3) application, via CrossBridge (formerly Flascc). In this implementation, we write the entire audio track into the CModule's memory, using malloc from the AS3 application. Once written, the application starts to process each of the image frames we would like to combine with our audio. This process begins with the AS3 application calling the CModule's write_frame public method.

    C:
    int write_frame(struct Session *s, uint8_t *buffer, int bufferSize){}

    AS3:
    var ret:int = writeFrame(_sessionPtr, _pixelBytesPtr, _pixelBytes.length);

    Once the video output has been created, it is retrieved from the CModule to AS3 as a byte array.

    With this implementation, for a long-duration video or audio track the application runs out of memory (there's a memory limit within our CrossBridge sandbox environment). The largest portion of this memory is currently our audio track, as it's uncompressed PCM data (raw float values).

    Ideally, we would like to write a single audio frame and video frame together, with the AS3 application writing one audio frame's byte array to the CModule's memory. I have attempted to do this by allocating the memory required for a single frame of audio using malloc, and then overwriting this memory each time write_frame is called. However, this results in the video file containing a single frame of audio at the start of the video, and no other audio.

    I'm convinced that the audio frame is being constructed correctly, but I believe this approach is conflicting with some of the code within our Muxing.c file. It's a little different from FFmpeg's example file (https://ffmpeg.org/doxygen/trunk/muxing_8c-source.html), as this file has been modified by several people. Here are the method calls from within write_frame:

    fill_audio_buffer(s->audio_input, s->audio_input_length, s->audio_input_index, s->audio_input_frame_size * 2, s->audio_frame_buffer);

    retval = av_samples_alloc(converted_buffer, NULL, 2, out_samples, audio_st->codec->sample_fmt, 0);

    out_samples = swr_convert(s->audio_swr_context, converted_buffer, out_samples, (void *) &s->audio_frame_buffer, in_samples);

    retval = write_audio_frame(s, s->oc, s->audio_st, s->audio_input_frame_size, (uint16_t *) converted_buffer[0]);

    s->audio_input_index += s->audio_input_frame_size * 2;

    Is it possible to move to a procedural muxing approach, writing one frame of audio and one frame of image at a time? Even if it's slightly slower, it'll mean we're not holding the entire audio track in memory. Any suggestions on the required approach would be great, thanks in advance!
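    For reference, ffmpeg's doc/examples/muxing.c interleaves exactly this way: at each step it writes whichever stream is behind, compared with av_compare_ts(). Below is a hedged C sketch of that step; the Session fields and the two write_* helpers are assumptions modelled on the question, not the project's actual code.

    #include <libavformat/avformat.h>
    #include <libavutil/mathematics.h>

    struct Session {                      /* reduced to what this step needs */
        AVStream *video_st, *audio_st;
        int64_t video_next_pts, audio_next_pts;
    };

    int write_one_audio_frame(struct Session *s);   /* assumed helpers */
    int write_video_frame_px(struct Session *s);

    /* One muxing step per incoming image frame: first drain any audio
     * due before the new video timestamp, then write the video frame. */
    static int mux_step(struct Session *s)
    {
        while (av_compare_ts(s->audio_next_pts, s->audio_st->time_base,
                             s->video_next_pts, s->video_st->time_base) <= 0) {
            int ret = write_one_audio_frame(s);     /* advances audio_next_pts */
            if (ret < 0)
                return ret;
        }
        return write_video_frame_px(s);             /* advances video_next_pts */
    }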

    @VC. One - The PCM data is made outside of FFmpeg and then written to the memory that FFmpeg has access to (using malloc; the pointer to this address is then passed to FFmpeg).

    The FFmpeg output file can be either a .WMV file or an .AVI file - the WMV2 and DIVX codecs are used respectively. I have made some modifications since posting the original question, but you're correct in thinking that the first chunk was being used and the frame size then increased, meaning the next read of the buffer would yield nothing, as it exceeded the buffer.

    I've now made some progress by resetting the index audio_input_index back to '0' at the start of each write_frame call. However, I'll need to check whether this is the correct approach, as between each audio frame (1 second at 1 fps) there is a slight blip/audio pop noise. In addition to this, the last few frames of audio seem to overlap, causing some of the audio to be repeated. Is it safe practice with C/FFmpeg to recycle a buffer in this way? It seems that the length of each audio frame changes - at the AS3 level, my current calculation of the audio frame byte length is (44,100 Hz sample rate * 8) / frames per second. It's * 8 because it's two channels, and each float value is 4 bytes.
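    One way to avoid recycling the raw buffer by hand is to queue the incoming PCM in an AVAudioFifo, so any remainder survives between write_frame calls (pop and overlap symptoms are typical of losing or re-reading that remainder). A minimal sketch, assuming interleaved 2-channel float input; the Session fields here are assumptions, not the project's real layout.

    #include <libavutil/audio_fifo.h>
    #include <libavutil/error.h>
    #include <libavutil/samplefmt.h>

    struct Session {                      /* reduced for this sketch */
        AVAudioFifo *audio_fifo;
        int audio_input_frame_size;      /* samples per codec frame */
    };

    int init_audio_fifo(struct Session *s)
    {
        /* interleaved float PCM, 2 channels, grows on demand */
        s->audio_fifo = av_audio_fifo_alloc(AV_SAMPLE_FMT_FLT, 2,
                                            s->audio_input_frame_size);
        return s->audio_fifo ? 0 : AVERROR(ENOMEM);
    }

    /* each write_frame call: push the chunk AS3 just wrote ... */
    int queue_audio(struct Session *s, float *pcm, int nb_samples)
    {
        void *planes[1] = { pcm };       /* one plane when interleaved */
        return av_audio_fifo_write(s->audio_fifo, planes, nb_samples);
    }

    /* ... and pop exactly one codec frame whenever enough is buffered,
     * feeding the result to swr_convert()/write_audio_frame() as before */
    int pop_audio_frame(struct Session *s, float *frame_buf)
    {
        if (av_audio_fifo_size(s->audio_fifo) < s->audio_input_frame_size)
            return 0;                    /* keep the remainder for later */
        void *planes[1] = { frame_buf };
        return av_audio_fifo_read(s->audio_fifo, planes,
                                  s->audio_input_frame_size);
    }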

    Thanks again for your help

  • JavaCV: avformat_open_input() hangs (not network, but with custom AVIOContext)

    14 October 2015, by Yun Tao Hai

    I'm using a custom AVIOContext to bridge FFMpeg with Java IO. The function avformat_open_input() never returns. I have searched the web for similar problems, all of which were caused by faulty networks or wrong server configurations. However, I'm not using the network at all, as you can see in the following little program:

    package com.example;

    import org.bytedeco.javacpp.*;
    import java.io.File;
    import java.io.IOException;
    import java.io.RandomAccessFile;
    import static org.bytedeco.javacpp.avcodec.*;
    import static org.bytedeco.javacpp.avformat.*;
    import static org.bytedeco.javacpp.avutil.*;
    import static org.bytedeco.javacpp.avdevice.*;
    import static org.bytedeco.javacpp.avformat.AVFormatContext.*;

    public class Test {

       public static void main(String[] args) throws Exception {
           File dir = new File(System.getProperty("user.home"), "Desktop");
           File file = new File(dir, "sample.3gp");
           final RandomAccessFile raf = new RandomAccessFile(file, "r");

           Loader.load(avcodec.class);
           Loader.load(avformat.class);
           Loader.load(avutil.class);
           Loader.load(avdevice.class);
           Loader.load(swscale.class);
           Loader.load(swresample.class);

           avcodec_register_all();
           av_register_all();
           avformat_network_init();
           avdevice_register_all();

           Read_packet_Pointer_BytePointer_int reader = new Read_packet_Pointer_BytePointer_int() {
               @Override
               public int call(Pointer pointer, BytePointer buf, int bufSize) {
                   try {
                       byte[] data = new byte[bufSize]; // this is inefficient, just use as a quick example
                       int read = raf.read(data);

                       if (read <= 0) {
                           System.out.println("EOF found.");
                           return AVERROR_EOF;
                       }

                       System.out.println("Successfully read " + read + " bytes of data.");
                       buf.position(0);
                       buf.put(data, 0, read);
                       return read;
                   } catch (Exception ex) {
                       ex.printStackTrace();
                       return -1;
                   }
               }
           };

           Seek_Pointer_long_int seeker = new Seek_Pointer_long_int() {
               @Override
               public long call(Pointer pointer, long offset, int whence) {
                   try {
                       raf.seek(offset);
                       System.out.println("Successfully seeked to position " + offset + ".");
                       return offset;
                   } catch (IOException ex) {
                       return -1;
                   }
               }
           };

           int inputBufferSize = 32768;
           BytePointer inputBuffer = new BytePointer(av_malloc(inputBufferSize));
           AVIOContext ioContext = avio_alloc_context(inputBuffer, inputBufferSize, 1, null, reader, null, seeker);
           AVInputFormat format = av_find_input_format("3gp");
           AVFormatContext formatContext = avformat_alloc_context();
           formatContext.iformat(format);
           formatContext.flags(formatContext.flags() | AVFMT_FLAG_CUSTOM_IO);
           formatContext.pb(ioContext);

           // This never returns. And I can never get result.
           int result = avformat_open_input(formatContext, "", format, null);

           // all clean-up code omitted for simplicity
       }

    }

    And below is my sample console output:

    Successfully read 32768 bytes of data.
    Successfully read 32768 bytes of data.
    Successfully read 32768 bytes of data.
    Successfully read 32768 bytes of data.
    Successfully read 32768 bytes of data.
    Successfully read 7240 bytes of data.
    EOF found.

    I've checked the sum of bytes, which corresponds to the file size; EOF is also hit, meaning the file is completely read. Actually, I'm a bit skeptical: why would avformat_open_input() read the entire file and still not return? There must be something wrong with what I am doing. Can any expert shed some light or point me in the right direction? I'm new to javacv and ffmpeg, and especially to programming with buffers and the like. Any help, suggestion or criticism is welcome. Thanks in advance.
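    For comparison, here is a hedged C sketch of the same custom-I/O wiring with a plain FILE stand-in. One easy-to-miss part of the callback contract is that ffmpeg may invoke the seek callback with whence == AVSEEK_SIZE to ask for the total stream size; a seeker that only ever calls raf.seek(offset), like the one above, cannot answer that query. This is an illustration of the contract, not a confirmed diagnosis of the hang.

    #include <errno.h>
    #include <stdio.h>
    #include <libavformat/avformat.h>

    static int read_cb(void *opaque, uint8_t *buf, int buf_size)
    {
        size_t n = fread(buf, 1, buf_size, (FILE *) opaque);
        return n > 0 ? (int) n : AVERROR_EOF;   /* never return 0 on EOF */
    }

    static int64_t seek_cb(void *opaque, int64_t offset, int whence)
    {
        FILE *f = (FILE *) opaque;
        if (whence == AVSEEK_SIZE) {            /* report total stream size */
            long cur = ftell(f), end;
            fseek(f, 0, SEEK_END);
            end = ftell(f);
            fseek(f, cur, SEEK_SET);
            return end;
        }
        if (fseek(f, (long) offset, whence & ~AVSEEK_FORCE) != 0)
            return -1;
        return ftell(f);
    }

    int open_with_custom_io(const char *path, AVFormatContext **fc)
    {
        FILE *f = fopen(path, "rb");
        if (!f)
            return AVERROR(errno);
        int bufsz = 32768;
        uint8_t *buf = av_malloc(bufsz);
        AVIOContext *pb = avio_alloc_context(buf, bufsz, 0 /* read-only */,
                                             f, read_cb, NULL, seek_cb);
        *fc = avformat_alloc_context();
        (*fc)->pb = pb;
        (*fc)->flags |= AVFMT_FLAG_CUSTOM_IO;
        return avformat_open_input(fc, "", NULL, NULL);
    }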

  • FFSERVER - streaming an ASF video as WebM output

    30 May 2014, by Emmanuel Brunet

    I'm trying to stream an IP webcam's ASF live stream through ffserver to produce WebM output. The server starts successfully, but the ffmpeg command used to feed ffserver fails and generates a core dump.

    Input stream

    $ ffprobe http://account:password@webcam/videostream.asf

    Input #0, asf, from 'http://account:password@webcam/videostream.asf':
     Duration: N/A, start: 0.000000, bitrate: 32 kb/s
       Stream #0:0: Video: mjpeg (MJPG / 0x47504A4D), yuvj422p(pc), 640x480, 25 tbr, 1k tbn, 1k tbc
       Stream #0:1: Audio: adpcm_ima_wav ([17][0][0][0] / 0x0011), 8000 Hz, 1 channels, s16p, 32 kb/s

    ffserver configuration

    My ffserver configuration is:

    Port 8091
    RTSPPort 554
    BindAddress 192.168.1.62
    MaxHTTPConnections 1000
    MaxClients 100
    MaxBandwidth 1000
    CustomLog -

    <Feed webcam.ffm>
           File /tmp/webcam.ffm
           FileMaxSize 500M
           ACL allow localhost
           ACL allow 192.168.0.0 192.168.255.255
    </Feed>

    <stream>              # Output stream URL definition
      Feed webcam.ffm              # Feed from which to receive video
      Format webm

      # Audio settings
      AudioCodec vorbis
      AudioBitRate 64             # Audio bitrate

      # Video settings
      VideoCodec libvpx
      VideoSize 640x480           # Video resolution
      VideoFrameRate 25           # Video FPS
      AVOptionVideo flags +global_header  # Parameters passed to encoder
                                          # (same as ffmpeg command-line parameters)
      AVOptionVideo cpu-used 0
      AVOptionVideo qmin 10
      AVOptionVideo qmax 42
      AVOptionVideo quality good
      AVOptionAudio flags +global_header
      PreRoll 15
      StartSendOnKey
      # VideoBitRate 32            # Video bitrate
    </stream>

    <stream>
           Format status
           # Only allow local people to get the status
           ACL allow localhost
           ACL allow 192.168.0.0 192.168.255.255
    </stream>

    ffmpeg feed

    I run the following command, which fails:

    $ ffmpeg -i http://account:password@webcam/videostream.asf http://192.168.1.62:8091/webcam.ffm

    Input #0, asf, from 'http://account:password@webcam/videostream.asf':
     Duration: N/A, start: 0.000000, bitrate: 32 kb/s
       Stream #0:0: Video: mjpeg (MJPG / 0x47504A4D), yuvj422p(pc), 640x480, 25 tbr, 1k tbn, 1k tbc
       Stream #0:1: Audio: adpcm_ima_wav ([17][0][0][0] / 0x0011), 8000 Hz, mono, s16p, 32 kb/s
    [swscaler @ 0x36a80c0] deprecated pixel format used, make sure you did set range correctly
    Segmentation fault

    I tried:

    $ ffmpeg  -i http://account:password@webcam/videostream.asf -pix_fmt yuv420p  http://192.168.1.62:8091/webcam.ffm

    But it raises the same error.
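    For what it's worth, the swscaler warning refers to the deprecated yuvj* pixel formats: newer FFmpeg expects a plain yuv format paired with an explicit full/limited range flag. A minimal C sketch of that pairing (purely illustrative, not code from the post):

    #include <libswscale/swscale.h>

    /* Scale yuvj422p-style input by declaring plain YUV422P plus
     * full (JPEG) input range, instead of the deprecated alias. */
    struct SwsContext *make_scaler(int w, int h)
    {
        struct SwsContext *c = sws_getContext(
            w, h, AV_PIX_FMT_YUV422P,      /* instead of AV_PIX_FMT_YUVJ422P */
            w, h, AV_PIX_FMT_YUV420P,
            SWS_BILINEAR, NULL, NULL, NULL);
        if (!c)
            return NULL;
        const int *coeffs = sws_getCoefficients(SWS_CS_ITU601);
        /* srcRange=1 marks the input as full range ("set range correctly");
         * 0, 1<<16, 1<<16 are the identity brightness/contrast/saturation. */
        sws_setColorspaceDetails(c, coeffs, 1, coeffs, 0, 0, 1 << 16, 1 << 16);
        return c;
    }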

    Thanks for your help

    Edit

    For easier testing (I thought), I tried to publish the whole ASF stream as is, i.e. connecting the webcam's ASF output stream to an ffserver stream that also outputs ASF format, with mirrored encoding. So I changed the ffserver configuration to:

    ...
    <stream>
       Feed webcam.ffm
       Format asf
       VideoFrameRate 25
       VideoSize 640x480
       VideoBitRate 256
       VideoBufferSize 1000
       VideoGopSize 30
       AudioBitRate 32
       StartSendOnKey
    </stream>
    ...

    And the output is now:

    Input #0, asf, from 'http://account:password@webcam/videostream.asf':
     Duration: N/A, start: 0.000000, bitrate: 32 kb/s
       Stream #0:0: Video: mjpeg (MJPG / 0x47504A4D), yuvj422p(pc), 640x480, 1k tbr, 1k tbn, 1k tbc
       Stream #0:1: Audio: adpcm_ima_wav ([17][0][0][0] / 0x0011), 8000 Hz, mono, s16p, 32 kb/s
    [swscaler @ 0x3d620c0] deprecated pixel format used, make sure you did set range correctly
    Output #0, ffm, to 'http://192.168.1.62:8091/webcam.ffm':
     Metadata:
       creation_time   : now
       encoder         : Lavf55.40.100
       Stream #0:0: Audio: wmav2, 22050 Hz, mono, fltp, 32 kb/s
       Metadata:
         encoder         : Lavc55.64.100 wmav2
       Stream #0:1: Video: msmpeg4v3 (msmpeg4), yuv420p, 640x480, q=2-31, 256 kb/s, 1k fps, 1000k tbn, 1k tbc
       Metadata:
    Stream mapping:
     Stream #0:1 -> #0:0 (adpcm_ima_wav -> wmav2)
     Stream #0:0 -> #0:1 (mjpeg -> msmpeg4)
    Press [q] to stop, [?] for help
    Segmentation fault

    I can’t even forward the stream.