Advanced search

Media (0)

Keyword: - Tags -/médias

No media matching your criteria is available on the site.

Other articles (51)

  • Configuring language support

    15 November 2010, by

    Accessing the configuration and adding supported languages
    To configure support for new languages, go to the "Administer" section of the site.
    From there, the navigation menu gives access to a "Language management" section where support for new languages can be activated.
    Each newly added language can still be deactivated as long as no object has been created in that language; once one has, it becomes greyed out in the configuration and (...)

  • List of compatible distributions

    26 April 2011, by

    The table below lists the Linux distributions compatible with the automated installation script of MediaSPIP.

    Distribution   Version name           Version number
    Debian         Squeeze                6.x.x
    Debian         Wheezy                 7.x.x
    Debian         Jessie                 8.x.x
    Ubuntu         The Precise Pangolin   12.04 LTS
    Ubuntu         The Trusty Tahr        14.04

    If you want to help us improve this list, you can provide us access to a machine whose distribution is not mentioned above or send the necessary fixes to add (...)

  • Publishing on MediaSPIP

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, provided your MediaSPIP installation is at version 0.2 or higher. If in doubt, contact the administrator of your MediaSPIP to find out.

On other sites (4609)

  • Wav File Encoding with FFMPEG

    7 September 2011, by user924702

    I want to convert raw PCM data (captured from an Android phone mic) into a GSM (libgsm) WAV file. After encoding to the file, VLC shows the correct codec information and duration but is unable to play the contents. Please help me find what I am doing wrong.

    Below is my code for encoding and header writing:

    void EncodeTest(uint8_t *audioData, size_t audioSize)
    {
       AVCodecContext  *audioCodec;
       AVCodec *codec;
       uint8_t *buf;    int bufSize, frameBytes;
       __android_log_print(ANDROID_LOG_INFO, DEBUG_TAG,"Let's encode %p with size %d\n", audioData, (int)audioSize);
       //Set up audio encoder
       codec = avcodec_find_encoder(CODEC_ID_GSM);
       if (codec == NULL){
           __android_log_print(ANDROID_LOG_ERROR, DEBUG_TAG,"ERROR:: Unable to find encoder(CODEC_ID_GSM)");
           return;
       }
       audioCodec                  = avcodec_alloc_context();
       audioCodec->channels        = 1;
       audioCodec->sample_rate     = 8000;
       audioCodec->sample_fmt      = SAMPLE_FMT_S16;
       audioCodec->bit_rate        = 13200;
       audioCodec->priv_data       = gsm_create();

       switch(audioCodec->codec_id) {
           case CODEC_ID_GSM: {
               audioCodec->frame_size = GSM_FRAME_SIZE;
               audioCodec->block_align = GSM_BLOCK_SIZE;
               int one = 1;
               gsm_option(audioCodec->priv_data, GSM_OPT_WAV49, &one);
               break;
           }
           case CODEC_ID_GSM_MS: {
               int one = 1;
               gsm_option(audioCodec->priv_data, GSM_OPT_WAV49, &one);
               audioCodec->frame_size = 2*GSM_FRAME_SIZE;
               audioCodec->block_align = GSM_MS_BLOCK_SIZE;
           }
       }
       audioCodec->coded_frame= avcodec_alloc_frame();
       audioCodec->coded_frame->key_frame= 1;
       audioCodec->time_base       = (AVRational){1,  audioCodec->sample_rate};
       audioCodec->codec_type      = CODEC_TYPE_AUDIO;

       if (avcodec_open(audioCodec, codec) < 0){
           __android_log_print(ANDROID_LOG_ERROR, DEBUG_TAG,"ERROR:: Unable to avcodec_open");
           return;
       }

       bufSize     = FF_MIN_BUFFER_SIZE * 10;
       buf         = (uint8_t *)malloc(bufSize);
       if (buf == NULL) return;
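       /* Raw input bytes consumed per encoded frame:
          frame_size samples per channel * channels * 2 bytes per S16 sample. */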
       frameBytes = audioCodec->frame_size * audioCodec->channels * 2;
       FILE *fileWrite = fopen(FILE_NAME,"w+b");
       if(NULL == fileWrite){
           __android_log_print(ANDROID_LOG_ERROR, DEBUG_TAG,"ERROR:: Unable to open file for writing.");
           free(buf);
           return;
       }
       /*Write wave header*/
       WriteWav(fileWrite, 32505);/*Just for test*/
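       /* NOTE: the header sizes above are placeholders; a real writer would
          fseek() back and patch the RIFF and data sizes once the final
          encoded byte count (nChunckSize below) is known. */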

       /* Encode the raw data frame by frame and write it after the header. */
       __android_log_print(ANDROID_LOG_INFO, DEBUG_TAG,"Let's encode the actual bytes");
       int nChunckSize = 0;
       while (audioSize >= frameBytes)
       {
           int packetSize;
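           /* Old-API call: consumes exactly frame_size samples from audioData
              and returns the number of encoded bytes written into buf. */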

           packetSize = avcodec_encode_audio(audioCodec, buf, bufSize, (short *)audioData);
           __android_log_print(ANDROID_LOG_INFO, DEBUG_TAG,"Encoder returned %d bytes of data\n", packetSize);
           nChunckSize += packetSize;
           audioData += frameBytes;
           audioSize -= frameBytes;
           if(NULL != fileWrite){
               fwrite(buf, packetSize, 1, fileWrite);
           }
           else{
               __android_log_print(ANDROID_LOG_ERROR, DEBUG_TAG,"Unable to open file for writing... NULL");
           }
       }
       if(NULL != fileWrite){
           fclose(fileWrite);
       }
       __android_log_print(ANDROID_LOG_INFO, DEBUG_TAG,"----- Done with nChunckSize: %d --- ",nChunckSize);
        __android_log_print(ANDROID_LOG_INFO, DEBUG_TAG,"*****************************");
       wavReadnDisplayHeader(FILE_NAME);
       __android_log_print(ANDROID_LOG_INFO, DEBUG_TAG,"*****************************");
       wavReadnDisplayHeader("/sdcard/Voicemail2.wav");
    }

    Header writing:

    /** Writes WAV headers */
    void WriteWav(FILE *f, long int bytes)
    {
       /* quick and dirty */
       fwrite("RIFF",sizeof(char),4,f);                /*  0-3 */      // RIFF
       PutNum(bytes+36,f,1,4);                         /*  4-7 */      // ChunkSize: 36 header bytes after this field + data
       fwrite("WAVEfmt ",sizeof(char),8,f);            /*  8-15 */     // WAVE header + fmt header
       PutNum(16,f,1,4);                               /* 16-19 */     // Size of the fmt chunk
       PutNum(49,f,1,2);                               /* 20-21 */     // Audio format: 49=GSM (WAV49), 1=PCM, 6=mulaw, 7=alaw, 257=IBM Mu-Law, 258=IBM A-Law, 259=ADPCM
       PutNum(1,f,1,2);                                /* 22-23 */     // Number of channels: 1=mono, 2=stereo
       PutNum(8000,f,1,4);                             /* 24-27 */     // Sampling frequency in Hz
       PutNum(2*8000,f,1,4);                           /* 28-31 */     // Bytes per second (ByteRate)
       PutNum(2,f,1,2);                                /* 32-33 */     // BlockAlign: 2=16-bit mono, 4=16-bit stereo
       PutNum(16,f,1,2);                               /* 34-35 */     // Number of bits per sample
       fwrite("data",sizeof(char),4,f);                /* 36-39 */
       PutNum(bytes,f,1,4);                            /* 40-43 */     // Sampled data length
    }

    Please help....
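
    For reference, a WAV49 (GSM 6.10) header normally looks quite different from the PCM-style one written above: format tag 49 goes with a block align of 65 bytes, 0 bits per sample, two extra fmt bytes declaring 320 samples per block, and a "fact" chunk carrying the total sample count, which may be part of why VLC recognises the codec but refuses to play. A minimal sketch, assuming PutNum writes little-endian integers exactly as in the question:

    /* Sketch: WAV49 (MS-GSM) header. dataBytes is the encoded GSM byte count,
       samples the total number of PCM samples that were encoded. */
    void WriteWav49(FILE *f, long int dataBytes, long int samples)
    {
       fwrite("RIFF",sizeof(char),4,f);
       PutNum(52+dataBytes,f,1,4);          // ChunkSize: "WAVE"(4) + fmt(28) + fact(12) + data header(8) + data
       fwrite("WAVEfmt ",sizeof(char),8,f);
       PutNum(20,f,1,4);                    // fmt chunk size: 18 + 2 extra bytes
       PutNum(49,f,1,2);                    // wFormatTag 0x0031 = GSM 6.10
       PutNum(1,f,1,2);                     // mono
       PutNum(8000,f,1,4);                  // sample rate
       PutNum(1625,f,1,4);                  // avg bytes/sec = 8000 * 65 / 320
       PutNum(65,f,1,2);                    // nBlockAlign: one 65-byte GSM block
       PutNum(0,f,1,2);                     // wBitsPerSample: 0 for compressed formats
       PutNum(2,f,1,2);                     // cbSize: two extra format bytes follow
       PutNum(320,f,1,2);                   // wSamplesPerBlock
       fwrite("fact",sizeof(char),4,f);
       PutNum(4,f,1,4);                     // fact chunk size
       PutNum(samples,f,1,4);               // total sample count, required for compressed WAVs
       fwrite("data",sizeof(char),4,f);
       PutNum(dataBytes,f,1,4);             // encoded data length
    }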

  • Working with ffmpeg in Xamarin Android

    20 April 2017, by Ahmed Mujtaba

    I'm building an Android app using Xamarin. The app needs to capture video from the camera and encode it so it can be sent to a server. Initially I used an encoder library on the server side to encode recorded video, but it proved extremely unreliable and inefficient, especially for large video files; I have posted my issues about that on another thread here. I then decided to encode the video on the client side and send it to the server afterwards.

    I've found encoding to be a bit complicated, and there isn't much information available on how it can be done, so I turned to the only approach I knew: using ffmpeg. I've found some solutions. There's a project on GitHub that demonstrates how ffmpeg is used inside a Xamarin Android project, but running the solution doesn't give any output. The project ships an ffmpeg binary which is installed to the phone directory using the code below:

    _ffmpegBin = InstallBinary(XamarinAndroidFFmpeg.Resource.Raw.ffmpeg, "ffmpeg", false);

    Below is the example code for encoding video into different sets of outputs:

       _workingDirectory = Android.OS.Environment.ExternalStorageDirectory.AbsolutePath;
           var sourceMp4 = "cat1.mp4";
           var destinationPathAndFilename = System.IO.Path.Combine (_workingDirectory, "cat1_out.mp4");
           var destinationPathAndFilename2 = System.IO.Path.Combine (_workingDirectory, "cat1_out2.mp4");
           var destinationPathAndFilename4 = System.IO.Path.Combine (_workingDirectory, "cat1_out4.wav");
           if (File.Exists (destinationPathAndFilename))
               File.Delete (destinationPathAndFilename);
           CreateSampleFile(Resource.Raw.cat1, _workingDirectory, sourceMp4);


           var ffmpeg = new FFMpeg (this, _workingDirectory);

           var sourceClip = new Clip (System.IO.Path.Combine(_workingDirectory, sourceMp4));

           var result = ffmpeg.GetInfo (sourceClip);

           var br = System.Environment.NewLine;

           // There are callbacks based on Standard Output and Standard Error when ffmpeg binary is running as a process:

           var onComplete = new MyCommand ((_) => {
               RunOnUiThread(() =>_logView.Append("DONE!" + br + br));
           });

           var onMessage = new MyCommand ((message) => {
               RunOnUiThread(() =>_logView.Append(message + br + br));
           });

           var callbacks = new FFMpegCallbacks (onComplete, onMessage);

           // 1. The idea of this first test is to show that video editing is possible via FFmpeg:
           // It results in a 150x150 movie that eventually zooms on a cat ear. This is desaturated, and there's a fade in.

           var filters = new List<VideoFilter> ();
           filters.Add (new FadeVideoFilter ("in", 0, 100));
           filters.Add(new CropVideoFilter("150","150","0","0"));
           filters.Add(new ColorVideoFilter(1.0m, 1.0m, 0.0m, 0.5m, 1.0m, 1.0m, 1.0m, 1.0m));
           var outputClip = new Clip (destinationPathAndFilename) { videoFilter = VideoFilter.Build (filters)  };
           outputClip.H264_CRF = "18"; // It's the quality coefficient for H264 - Default is 28. I think 18 is pretty good.
           ffmpeg.ProcessVideo(sourceClip, outputClip, true, new FFMpegCallbacks(onComplete, onMessage));

           // 2. This is a similar version in command line only:
           string[] cmds = new string[] {
               "-y",
               "-i",
               sourceClip.path,
               "-strict",
               "-2",
               "-vf",
               "mp=eq2=1:1.68:0.3:1.25:1:0.96:1",
               // output options such as -acodec must come before the output file
               "-acodec",
               "copy",
               destinationPathAndFilename2,
           };
           ffmpeg.Execute (cmds, callbacks);

           // 3. This lists codecs:
           string[] cmds3 = new string[] {
               "-codecs",
           };
           ffmpeg.Execute (cmds3, callbacks);

           // 4. This converts to WAV
           // Note that the cat movie just has some silent house noise.
           ffmpeg.ConvertToWaveAudio(sourceClip, destinationPathAndFilename4, 44100, 2, callbacks, true);

    I have tried different commands but no output file is generated. I have tried another project found here, but it has the same issue: I get no errors, yet no output file is generated. I'm really hoping someone can help me find a way to use ffmpeg in my project, or some other way to compress video before transporting it to the server.

    I will really appreciate it if someone can point me in the right direction.
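
    A quick sanity check, using the same wrapper API as the sample above, is to run a trivial command such as "-version" through ffmpeg.Execute and watch the onMessage callback: if nothing at all comes back, the binary itself never ran (wrong path or missing execute permission) rather than the commands being wrong. For the actual goal of compressing video before upload, a typical H.264/AAC re-encode would be expressed with standard ffmpeg flags along these lines (the file names here are placeholders):

    ffmpeg -y -i input.mp4 -vcodec libx264 -preset veryfast -crf 28 -acodec aac -strict -2 -b:a 128k output.mp4

    With the wrapper shown above, the same arguments (minus the leading "ffmpeg") would go into the string[] passed to ffmpeg.Execute.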

  • How to stream H.264 bitstream to browser

    21 January 2019, by BobtheMagicMoose

    This is a followup to https://raspberrypi.stackexchange.com/questions/93254/stream-usb-webcam-with-audio?noredirect=1#comment150507_93254

    I, like many other brave tinkerers before me, thought it would be a simple task to take an old USB camera (a c920) and pair it with a Raspberry Pi to make a network streaming device (e.g., a baby monitor). Like those who have gone before me, I have now realized (after two days of tearing my hair out) that this is an extremely complicated task.

    Problem statement: I have a Raspberry Pi Zero and a c920 webcam. I want to use the H.264 bitstream from the webcam and serve it from the Pi without transcoding it (the feeble processor would really struggle). I want to combine the video stream with its audio and send it to a browser (phone, tablet, PC; something HTML5, without NPAPI plugins).

    My current strategy is to do the following:

    ffmpeg -re -f s16le -i /dev/zero -f v4l2 -thread_queue_size 512 -codec:v h264 -s 1920x1080 -i /dev/video0 -codec:v copy -acodec aac -ab 128k -g 50 http://localhost:8090/camera.ffm (this is with dummy audio - I figured I would add audio later)

    Followed by sudo ffserver -d -f /etc/ffserver.conf to receive the feed and broadcast it as a stream. This is the ffserver.conf file:

    HTTPPort 8090
    HTTPBindAddress 0.0.0.0
    MaxHTTPConnections 2000
    MaxClients 1000
    MaxBandwidth 100000
    CustomLog -
    <Feed camera.ffm>
     File /tmp/streamwebm.ffm
     FileMaxSize 50M
     ACL allow localhost
     ACL allow 128.199.149.46
     #ACL allow 127.0.0.1
     ACL allow 192.168.0.0 192.168.0.255
    </Feed>
    <Stream stream>
    Format webm

    # Video Settings
    VideoFrameRate 30
    VideoSize 1920x1080

    # Audio settings
    AudioCodec libvorbis
    AudioSampleRate 48000
    AVOptionAudio flags +global_header

    MaxTime 0
    AVOptionVideo me_range 16
    AVOptionVideo qdiff 4
    AVOptionVideo qmin 4
    AVOptionVideo qmax 40
    #AVOptionVideo good
    AVOptionVideo flags +global_header

    # Streaming settings
    PreRoll 10
    StartSendOnKey

    Metadata author "author"
    Metadata copyright "copyright"
    Metadata title "Web app name"
    Metadata comment "comment"
    </Stream>

    My basic HTML is <video><source src="http://localhost:8090/stream"></video>

    The stream, however, doesn't work (the browser won't connect) and I get the following:
    (screenshot not reproduced here)

    And the browser on the client says (failed) NET::ERR_CONNECTION_REFUSED

    Thoughts:
    • "Begin stream simple mp4 with ffserver" explains that ffserver can't stream .mp4 because of headers or something. This is why I am using webm (which I believe doesn't support H.264, and which causes the really slow transcode to VP9). I'm not concerned about CPU usage at the moment; I just want to get an image to appear in the browser!

    • I hear one issue deals with "chunking": the camera's H.264 is a raw bitstream, but H.264 streams for HTML5 should be chunked. Not sure how that would work (see the sketch after this list).

    • I have tried VLC for some things (RTP) but haven't had success.

    • Most resources (SE and other sites) are from 2010-2015, and it seems as though v4l2 and other things have developed since then.

    • As my problem is most likely general ignorance of the subject matter, I would appreciate any answers that provide some general understanding as to the theory behind different techniques. I know this makes the question more of a call for opinion and less appropriate for SE, but I’m fixing to throw my computer out the window (you know the feeling).
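
    A sketch of the "chunking" route mentioned above (untested, and the paths and segment settings are assumptions): skip ffserver entirely and let ffmpeg pull the camera's native H.264 and cut it into HLS segments with the video stream copied rather than transcoded. Any plain web server can then serve the playlist; Safari plays HLS natively, and other browsers can use hls.js.

    ffmpeg -f v4l2 -input_format h264 -video_size 1920x1080 -i /dev/video0 -c:v copy -f hls -hls_time 2 -hls_list_size 5 -hls_flags delete_segments /var/www/html/stream.m3u8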

    Thank you!