
Media (91)

Other articles (82)

  • The plugin: Podcasts.

    14 July 2010, by

    The podcasting problem is once again one that reveals how data transport is standardized on the Internet.
    Two interesting formats exist: the one developed by Apple, strongly geared toward iTunes, whose SPEC is here; and the "Media RSS Module" format, which is more "free", notably supported by Yahoo and the Miro software.
    File types supported in the feeds
    Apple's format only allows the following formats in its feeds: .mp3 audio/mpeg .m4a audio/x-m4a .mp4 (...)

  • Organizing by category

    17 May 2013, by

    In MédiaSPIP, a rubrique has two names: category and rubrique.
    The various documents stored in MédiaSPIP can be filed under different categories. A category can be created by clicking on "publish a category" in the "publish" menu at the top right (after logging in). A category can also be filed under another category, which means a whole tree of categories can be built.
    The next time a document is published, the newly created category will be offered (...)

  • Retrieving information from the master site when installing an instance

    26 November 2010, by

    Purpose
    On the main site, a mutualisation instance is defined by several things: its data in the spip_mutus table; its logo; its main author (the id_admin in the spip_mutus table, corresponding to an id_auteur in the spip_auteurs table), who will be the only one allowed to finalize the creation of the mutualisation instance.
    It can therefore be quite useful to retrieve some of this information in order to complete the installation of an instance, for example to retrieve the (...)

On other sites (6806)

  • How to record a webcam to a file outside of X11?

    13 December 2017, by Dav Clark

    I’m working with teachers to automatically record their classes, so we can review them and improve the quality of teaching. We have computers running Ubuntu 17.10 with multiple webcams in a couple of classrooms - but I could run other software if it makes this task easier.

    I can successfully record a stream from the webcam to an h264 encoded file using gstreamer. The following should work for most people with gstreamer installed, but I’ve got fancier pipelines using vaapi that can simultaneously encode multiple 4k streams on a NUC with room to spare! My point is that GStreamer works great when I’m typing at a terminal in the GUI. The example:

    gst-launch-1.0 -e autovideosrc ! videoconvert ! \
     openh264enc max-bitrate=256000 ! h264parse ! \
     mp4mux ! filesink location=somefile.mp4

    I imagine I could also do this with ffmpeg, or OpenCV, or maybe even VLC (I can record a webcam via the GUI, so I guess I could use that to generate a command line?).
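
    For instance, something along these lines would presumably be the ffmpeg counterpart of the pipeline above (untested on my side; the device path /dev/video0 and the encoder settings are just placeholders):

    ffmpeg -f v4l2 -i /dev/video0 -c:v libx264 -preset veryfast somefile.mp4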

    But when I tried any of the above via SSH, for example, I get errors from GStreamer and OpenCV, and blank videos from ffmpeg (I haven’t tried VLC because I don’t currently have access to these machines). I need to automate, but I could potentially leave a user logged in. I just need some way to capture the webcam to disk with a reasonable amount of compression.

    I naively thought I could throw something like the above into a cron job and I’d be good to go (intending to send a SIGINT to end recording). But anything that can be automatically scheduled somehow would be great.
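
    Something like the following wrapper script is what I have in mind (a sketch only; the device, duration, and output path are placeholders): cron starts the script, and timeout delivers the SIGINT that lets ffmpeg finalize the file.

    #!/bin/sh
    # record-class.sh - sketch: capture one webcam headlessly, stop after 50 minutes.
    # 'timeout -s INT' sends SIGINT so ffmpeg can close the container cleanly.
    timeout -s INT 50m ffmpeg -nostdin -loglevel error \
     -f v4l2 -framerate 30 -video_size 1280x720 -i /dev/video0 \
     -c:v libx264 -preset veryfast -pix_fmt yuv420p \
     "/recordings/class-$(date +%F-%H%M).mp4"

    A crontab line such as 0 9 * * 1-5 /usr/local/bin/record-class.sh would then start it on a schedule.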

    EDIT: Below is an approach I’m trying with ffmpeg. You can see from the output that I can’t figure out how to specify pixel_format in a way that ffmpeg pays attention to! First, the command (using mkv because that seems to be a "low-stress" format, but I have also tried mov and mp4):

    ffmpeg -hwaccel vaapi -vaapi_device /dev/dri/renderD128 \
     -f v4l2 -framerate 30 -video_size hd720 -pixel_format yuv420p -i /dev/video1 output.mkv

    Like I said, I’m trying to get hardware acceleration, and you can see below that VAAPI is working (but I think just for decoding). You can easily remove the options from the first line, and I get similar results either way. I didn’t include the header with compile options and library versions, as it’s standard Ubuntu 17.10.
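
    The "No pixel format specified" warning in the log below seems to concern the output side: -pixel_format placed before -i only requests a capture format from v4l2, while the encoder’s pixel format has to be set after the input. A variant along those lines (untested here, and it drops the vaapi options) would be:

    ffmpeg -f v4l2 -framerate 30 -video_size hd720 -i /dev/video1 \
     -c:v libx264 -pix_fmt yuv420p output.mkv

    The log that follows is from the original command above.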

    libva info: VA-API version 0.40.0
    libva info: va_getDriverName() returns 0
    libva info: Trying to open /usr/lib/x86_64-linux-gnu/dri/i965_drv_video.so
    libva info: Found init function __vaDriverInit_0_40
    libva info: va_openDriver() returns 0
    Input #0, video4linux2,v4l2, from '/dev/video1':
     Duration: N/A, start: 42437.238243, bitrate: 442368 kb/s
       Stream #0:0: Video: rawvideo (YUY2 / 0x32595559), yuyv422, 1280x720, 442368 kb/s, 30 fps, 30 tbr, 1000k tbn, 1000k tbc
    File 'output.mkv' already exists. Overwrite ? [y/N] y
    Stream mapping:
     Stream #0:0 -> #0:0 (rawvideo (native) -> h264 (libx264))
    Press [q] to stop, [?] for help
    No pixel format specified, yuv422p for H.264 encoding chosen.
    Use -pix_fmt yuv420p for compatibility with outdated media players.
    [libx264 @ 0x55d1a26a71a0] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX FMA3 AVX2 LZCNT BMI2
    [libx264 @ 0x55d1a26a71a0] profile High 4:2:2, level 3.1, 4:2:2 8-bit
    [libx264 @ 0x55d1a26a71a0] 264 - core 148 r2795 aaa9aa8 - H.264/MPEG-4 AVC codec - Copyleft 2003-2017 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=6 lookahead_threads=1 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
    Output #0, matroska, to 'output.mkv':
     Metadata:
       encoder         : Lavf57.71.100
       Stream #0:0: Video: h264 (libx264) (H264 / 0x34363248), yuv422p, 1280x720, q=-1--1, 30 fps, 1k tbn, 30 tbc
       Metadata:
         encoder         : Lavc57.89.100 libx264
       Side data:
         cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: -1
    Past duration 0.717049 too large
    Past duration 0.879128 too large
    frame=  567 fps= 16 q=27.0 size=    2156kB time=00:00:34.16 bitrate= 516.9kbits/s speed=0.938x

    I exit with Ctrl-C, which results in what appears to be an orderly exit:

    [libx264 @ 0x55d1a26a71a0] frame I:11    Avg QP:15.75  size: 18573
    [libx264 @ 0x55d1a26a71a0] frame P:2176  Avg QP:19.91  size:  4435
    [libx264 @ 0x55d1a26a71a0] frame B:173   Avg QP:20.00  size:  3232
    [libx264 @ 0x55d1a26a71a0] consecutive B-frames: 90.1%  0.1%  0.6%  9.2%
    [libx264 @ 0x55d1a26a71a0] mb I  I16..4: 34.0% 56.1%  9.8%
    [libx264 @ 0x55d1a26a71a0] mb P  I16..4:  0.1%  1.2%  0.0%  P16..4: 32.7%  3.1%  6.1%  0.0%  0.0%    skip:56.8%
    [libx264 @ 0x55d1a26a71a0] mb B  I16..4:  0.0%  0.3%  0.0%  B16..8: 31.9%  0.7%  0.1%  direct: 1.4%  skip:65.6%  L0:41.8% L1:57.9% BI: 0.3%
    [libx264 @ 0x55d1a26a71a0] 8x8 transform intra:81.2% inter:92.4%
    [libx264 @ 0x55d1a26a71a0] coded y,uvDC,uvAC intra: 25.4% 20.1% 2.1% inter: 10.2% 7.4% 0.0%
    [libx264 @ 0x55d1a26a71a0] i16 v,h,dc,p: 78% 10%  7%  5%
    [libx264 @ 0x55d1a26a71a0] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu:  7%  6% 72%  2%  3%  3%  2%  2%  3%
    [libx264 @ 0x55d1a26a71a0] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 34% 21% 25%  4%  5%  3%  4%  1%  3%
    [libx264 @ 0x55d1a26a71a0] i8c dc,h,v,p: 69% 14% 15%  2%
    [libx264 @ 0x55d1a26a71a0] Weighted P-Frames: Y:1.7% UV:0.1%
    [libx264 @ 0x55d1a26a71a0] ref P L0: 49.3%  2.7% 29.6% 18.1%  0.3%
    [libx264 @ 0x55d1a26a71a0] ref B L0: 69.3% 24.1%  6.6%
    [libx264 @ 0x55d1a26a71a0] ref B L1: 86.6% 13.4%
    [libx264 @ 0x55d1a26a71a0] kb/s:529.95
    Exiting normally, received signal 2.
  • Use Named Pipe (C++) to send images to FFMPEG

    10 December 2017, by user1829136

    I have the following code in C++:

    #include <iostream>     // std::wcout
    #include <fstream>      // std::ifstream, std::ofstream
    #include <vector>
    #include <windows.h>    // CreateNamedPipe, ConnectNamedPipe, WriteFile

    using namespace std;

    int main(int argc, const char **argv)
    {
       wcout << "Creating an instance of a named pipe..." << endl;

       // Create a pipe to send data
       HANDLE pipe = CreateNamedPipe(
           L"\\\\.\\pipe\\my_pipe", // name of the pipe
           PIPE_ACCESS_OUTBOUND, // 1-way pipe -- send only
           PIPE_TYPE_BYTE, // send data as a byte stream
           1, // only allow 1 instance of this pipe
           0, // no outbound buffer
           0, // no inbound buffer
           0, // use default wait time
           NULL // use default security attributes
       );

       if (pipe == NULL || pipe == INVALID_HANDLE_VALUE) {
           wcout << "Failed to create outbound pipe instance.";
           // look up error code here using GetLastError()
           system("pause");
           return 1;
       }

       wcout << "Waiting for a client to connect to the pipe..." << endl;

       // This call blocks until a client process connects to the pipe
       BOOL result = ConnectNamedPipe(pipe, NULL);
       if (!result) {
           wcout << "Failed to make connection on named pipe." << endl;
           // look up error code here using GetLastError()
           CloseHandle(pipe); // close the pipe
           system("pause");
           return 1;
       }

       wcout << "Sending data to pipe..." << endl;

       // opening file
       ifstream infile;
       infile.open("E:/xmen.jpg", std::ios::binary);
       ofstream out("E:/lelel.jpg", std::ios::binary);

       infile.seekg(0, std::ios::end);
       size_t file_size_in_byte = infile.tellg();
       vector<char> file_vec;

       file_vec.resize(file_size_in_byte);

       infile.seekg(0, std::ios::beg);
       infile.read(&file_vec[0], file_size_in_byte);

       out.write(&file_vec[0], file_vec.size());

       // This call blocks until a client process reads all the data
       DWORD numBytesWritten = 0;
       result = WriteFile(
           pipe, // handle to our outbound pipe
           &file_vec[0], // data to send
           61026, // length of data to send (bytes)
           &numBytesWritten, // will store actual amount of data sent
           NULL // not using overlapped IO
       );

       if (result) {
           wcout << "Number of bytes sent: " << numBytesWritten << endl;
       } else {
           wcout << "Failed to send data." << endl;
           // look up error code here using GetLastError()
       }

       // Close the pipe (automatically disconnects client too)
       CloseHandle(pipe);

       wcout << "Done." << endl;

       system("pause");
       return 0;
    }

    Which I use to create a named pipe \\.\pipe\my_pipe, to which FFmpeg connects using the following command:

    64-static\bin\Video>ffmpeg.exe -loop 1 -s 4cif -f image2 -y -i \\.\pipe\\my_pipe -r 25 -vframes 250 -vcodec rawvideo -an eaeew.mov

    Output:

    ffmpeg version N-54233-g86190af Copyright (c) 2000-2013 the FFmpeg developers
     built on Jun 27 2013 16:49:12 with gcc 4.7.3 (GCC)
     configuration: --enable-gpl --enable-version3 --disable-w32threads --enable-avisynth --enable-bzlib --enable-fontconfig --enable-frei0r --enable-gnutls --enable-iconv --enable-libass --enable-libbluray --enable-libcaca --enable-libfreetype --enable-libgsm --enable-libilbc --enable-libmodplug --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-librtmp --enable-libschroedinger --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libtwolame --enable-libvo-aacenc --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libxavs --enable-libxvid --enable-zlib  libavutil      52. 37.101 / 52. 37.101
     libavcodec     55. 17.100 / 55. 17.100
     libavformat    55. 10.100 / 55. 10.100
     libavdevice    55.  2.100 / 55.  2.100
     libavfilter     3. 77.101 /  3. 77.101
     libswscale      2.  3.100 /  2.  3.100
     libswresample   0. 17.102 /  0. 17.102
     libpostproc    52.  3.100 / 52.  3.100
    [image2 @ 0000000003ee04a0] Could find no file with with path '\\.\pipe\\my_pipe
    ' and index in the range 0-4
    \\.\pipe\\my_pipe: No such file or directory

    I can see on my console that my C++ app received a connection, but I get the error above in FFmpeg. Can someone please advise?

    EDIT 1
    Using the command below

    ffmpeg.exe -s 4cif -i \\.\pipe\my_pipe -r 25 -vframes 250 -vcodec rawvideo -an tess.mov

    I get the following output

    ffmpeg version N-54233-g86190af Copyright (c) 2000-2013 the FFmpeg developers
     built on Jun 27 2013 16:49:12 with gcc 4.7.3 (GCC)
     configuration: --enable-gpl --enable-version3 --disable-w32threads --enable-avisynth --enable-bzlib --enable-fontconfig --enable-frei0r --enable-gnutls --enable-iconv --enable-libass --enable-libbluray --enable-libcaca --enable-libfreetype --enable-libgsm --enable-libilbc --enable-libmodplug --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-librtmp --enable-libschroedinger --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libtwolame --enable-libvo-aacenc --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libxavs --enable-libxvid --enable-zlib
     libavutil      52. 37.101 / 52. 37.101
     libavcodec     55. 17.100 / 55. 17.100
     libavformat    55. 10.100 / 55. 10.100
     libavdevice    55.  2.100 / 55.  2.100
     libavfilter     3. 77.101 /  3. 77.101
     libswscale      2.  3.100 /  2.  3.100
     libswresample   0. 17.102 /  0. 17.102
     libpostproc    52.  3.100 / 52.  3.100
    \\.\pipe\my_pipe: Invalid data found when processing input

    So, now it seems it was able to connect to the pipe but is not able to process the input.
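
    One direction I may try next (a sketch, untested): since the image2 demuxer expects a numbered sequence of image files (hence the "index in the range 0-4" error earlier), reading a single JPEG from a pipe presumably needs the image2pipe demuxer with the input decoder forced, something like:

    ffmpeg.exe -f image2pipe -c:v mjpeg -i \\.\pipe\my_pipe -r 25 -vframes 250 -vcodec rawvideo -an tess.mov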

  • Slow, robotic audio encoding with Humble-Video api (ffmpeg)

    30 November 2017, by Walker Knapp

    I have a program that is trying to parse pcm_s16le audio samples from a .wav file and encode them into mp3 using the Humble-Video API.
    This isn’t what the final program is trying to do, but it outlines the problem I’m encountering.
    The issue is that the output audio files sound robotic and slow.

    input.wav (just some random audio from a video game; ignore the wonky size headers): https://drive.google.com/file/d/1nQOJGIxoSBDzprXExyTVNyyipSKQjyU0/view?usp=sharing

    output.mp3: https://drive.google.com/file/d/1MfEFw2V7TiKS16SqSTv3wrbh6KoankIj/view?usp=sharing

    output.wav: https://drive.google.com/file/d/1XtDdCtYao0kS0Qe2l6JGu1tC5xvqt62f/view?usp=sharing

    import io.humble.video.*;

    import java.io.*;

    public class AudioEncodingTest {

       private static AudioChannel.Layout inLayout = AudioChannel.Layout.CH_LAYOUT_STEREO;
       private static int inSampleRate = 44100;
       private static AudioFormat.Type inFormat = AudioFormat.Type.SAMPLE_FMT_S16;
       private static int bytesPerSample = 2;

       private static File inFile = new File("input.wav");

       public static void main(String[] args) throws IOException, InterruptedException {
           render("output.mp3");
           render("output.wav");
       }

       public static void render(String filename) throws IOException, InterruptedException {

           //Starting everything up.

           Muxer muxer = Muxer.make(new File(filename).getAbsolutePath(), null, null);
           Codec codec = Codec.guessEncodingCodec(muxer.getFormat(), null, null, null, MediaDescriptor.Type.MEDIA_AUDIO);

           AudioFormat.Type findType = null;
           for(AudioFormat.Type type : codec.getSupportedAudioFormats()) {
               if(findType == null) {
                   findType = type;
               }
               if(type == inFormat) {
                   findType = type;
                   break;
               }
           }

           if(findType == null){
               throw new IllegalArgumentException("Couldn't find valid audio format for codec: " + codec.getName());
           }

           Encoder encoder = Encoder.make(codec);
           encoder.setSampleRate(44100);
           encoder.setTimeBase(Rational.make(1, 44100));
           encoder.setChannels(2);
           encoder.setChannelLayout(AudioChannel.Layout.CH_LAYOUT_STEREO);
           encoder.setSampleFormat(findType);
           encoder.setFlag(Coder.Flag.FLAG_GLOBAL_HEADER, true);

           encoder.open(null, null);
           muxer.addNewStream(encoder);
           muxer.open(null, null);

           MediaPacket audioPacket = MediaPacket.make();
           MediaAudioResampler audioResampler = MediaAudioResampler.make(encoder.getChannelLayout(), encoder.getSampleRate(), encoder.getSampleFormat(), inLayout, inSampleRate, inFormat);
           audioResampler.open();

           MediaAudio rawAudio = MediaAudio.make(1024/bytesPerSample, inSampleRate, 2, inLayout, inFormat);
           rawAudio.setTimeBase(Rational.make(1, inSampleRate));

           //Reading

           try(BufferedInputStream reader = new BufferedInputStream(new FileInputStream(inFile))){
               reader.skip(44);

               int totalSamples = 0;

               byte[] buffer = new byte[1024];
               int readLength;
               while((readLength = reader.read(buffer, 0, 1024)) != -1){
                   int sampleCount = readLength/bytesPerSample;

                   rawAudio.getData(0).put(buffer, 0, 0, readLength);
                   rawAudio.setNumSamples(sampleCount);
                   rawAudio.setTimeStamp(totalSamples);

                   totalSamples += sampleCount;

                   rawAudio.setComplete(true);

                   MediaAudio usedAudio = rawAudio;

                   if(encoder.getChannelLayout() != inLayout ||
                           encoder.getSampleRate() != inSampleRate ||
                           encoder.getSampleFormat() != inFormat){
                           usedAudio = MediaAudio.make(
                                   sampleCount,
                                   encoder.getSampleRate(),
                                   encoder.getChannels(),
                                   encoder.getChannelLayout(),
                                   encoder.getSampleFormat());
                           audioResampler.resample(usedAudio, rawAudio);
                   }

                   do{
                       encoder.encodeAudio(audioPacket, usedAudio);
                       if(audioPacket.isComplete()) {
                           muxer.write(audioPacket, false);
                       }
                   } while (audioPacket.isComplete());
               }
           }
           catch (IOException e){
               e.printStackTrace();
               muxer.close();
               System.exit(-1);
           }

           muxer.close();

       }
    }

    Edit

    I’ve gotten wave file exporting to work; however, mp3s remain the same, which is very confusing. I changed the section that counts how many samples each buffer of bytes contains.

    MediaAudio rawAudio = MediaAudio.make(1024, inSampleRate, channels, inLayout, inFormat);
       rawAudio.setTimeBase(Rational.make(1, inSampleRate));

       //Reading

       try(BufferedInputStream reader = new BufferedInputStream(new FileInputStream(inFile))){
           reader.skip(44);

           int totalSamples = 0;

           byte[] buffer = new byte[1024 * bytesPerSample * channels];
           int readLength;
           while((readLength = reader.read(buffer, 0, 1024 * bytesPerSample * channels)) != -1){
               int sampleCount = readLength/(bytesPerSample * channels);

               rawAudio.getData(0).put(buffer, 0, 0, readLength);
               rawAudio.setNumSamples(sampleCount);
               rawAudio.setTimeStamp(totalSamples);