Advanced search

Media (91)

Other articles (82)

  • MediaSPIP v0.2

    21 June 2013

    MediaSPIP 0.2 is the first stable version of MediaSPIP.
    Its official release date is 21 June 2013, announced here.
    The zip file provided here contains only the MediaSPIP sources in standalone form.
    As with the previous version, all software dependencies must be installed manually on the server.
    If you want to use this archive for a farm-mode installation, further modifications are also required (...)

  • MediaSPIP version 0.1 Beta

    16 April 2011

    MediaSPIP 0.1 beta is the first version of MediaSPIP deemed "usable".
    The zip file provided here contains only the MediaSPIP sources in standalone form.
    For a working installation, all software dependencies must be installed manually on the server.
    If you want to use this archive for a farm-mode installation, further modifications are also required (...)

  • Farm management

    2 March 2010

    The farm as a whole is managed by "super admins".
    Certain settings can be adjusted to regulate the needs of the different channels.
    For now it relies on the "Gestion de mutualisation" plugin.

On other sites (4839)

  • w32pthreads: Add pthread_once emulation

    7 October 2015, by Hendrik Leppkes
    w32pthreads: Add pthread_once emulation
    

    The emulation uses native InitOnce* APIs on Windows Vista+, and a
    lock-free/allocation-free approach using atomics and spinning for
    Windows XP.

    Signed-off-by: Luca Barbato <lu_zero@gentoo.org>

    • [DBH] compat/w32pthreads.h
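    The Windows XP fallback described in the commit message can be sketched in plain C11 with a single atomic state word. This is an illustrative reimplementation of the idea only, with hypothetical names, not the actual `compat/w32pthreads.h` code:

    ```c
    #include <stdatomic.h>
    #include <sched.h>

    /* Illustrative lock-free, allocation-free pthread_once fallback.
     * State machine: 0 = not run, 1 = init in progress, 2 = done. */
    typedef atomic_int once_control_t;
    #define ONCE_INIT 0

    static void once_emul(once_control_t *once, void (*init_routine)(void))
    {
        int expected = 0;
        if (atomic_compare_exchange_strong(once, &expected, 1)) {
            /* This thread won the race: run the initializer exactly once. */
            init_routine();
            atomic_store(once, 2);          /* publish completion */
        } else {
            /* Another thread is (or was) running init: spin until done. */
            while (atomic_load(once) != 2)
                sched_yield();
        }
    }
    ```

    On Vista and later the native InitOnce* APIs make this spinning path unnecessary, which is why the commit only uses it on XP.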
  • Syncing Audio with Video iOS RTSP Player

    3 October 2015, by Dave Thomas

    I am combining two different classes from two different git projects to create an RTSP streamer for an iOS live streaming application.

    Edit: I agree with the -1; this question is probably a shot in the dark. But to answer the question "why am I not using the DFURTSPPlayer library entirely?": because I would rather use the OpenGL YUV display from the second project, hackcam, than decode the video frames into UIImages as DFURTSP does. Hackcam does not have audio.

    Also, please comment if you downvote; at least help me find an answer by telling me what I need to refine to be clear, or point out why this question is inappropriate.

    My current issue is that the audio playback has about one second of latency and is out of sync with the video, which is close to real time.

    I know that the audio is in sync because I’ve tested the RTSP streams in VLC. Something is wrong with my implementation, mostly from Frankensteining these two projects together and the fact that I am not familiar with the FFmpeg C library or AudioQueue on iOS.

    Any help would be greatly appreciated!

    I’ve taken the AudioStreamer class from this repository:
    https://github.com/durfu/DFURTSPPlayer

    https://github.com/durfu/DFURTSPPlayer/blob/master/DFURTSPPlayer/DFURTSPPlayer/FFMpegDecoder/AudioStreamer.m

    And I am trying to get it to work with this one:
    https://github.com/hackacam/ios_rtsp_player/blob/master/src/FfmpegWrapper.m

    I can post more code if needed, but my main loop in FfmpegWrapper now looks like this (_audioController is a reference to AudioStreamer.m):

    -(int) startDecodingWithCallbackBlock: (void (^) (AVFrameData *frame)) frameCallbackBlock
                         waitForConsumer: (BOOL) wait
                      completionCallback: (void (^)()) completion
    {
       OSMemoryBarrier();
       _stopDecode=false;
       dispatch_queue_t decodeQueue = dispatch_queue_create("decodeQueue", NULL);
       dispatch_async(decodeQueue, ^{
           int frameFinished;
           OSMemoryBarrier();
           while (self->_stopDecode==false){
               @autoreleasepool {
                   CFTimeInterval currentTime = CACurrentMediaTime();
                   if ((currentTime-_previousDecodedFrameTime) > MIN_FRAME_INTERVAL &&
                       av_read_frame(_formatCtx, &_packetFFmpeg)>=0) {

                       _previousDecodedFrameTime = currentTime;
                       // Is this a packet from the video stream?
                       if(_packetFFmpeg.stream_index==_videoStream) {
                           // Decode video frame
                           avcodec_decode_video2(_codecCtx, _frame, &frameFinished,
                                                 &_packetFFmpeg);

                           // Did we get a video frame?
                           if(frameFinished) {
                               // create a frame object and call the block;
                               AVFrameData *frameData = [self createFrameData:_frame trimPadding:YES];
                               frameCallbackBlock(frameData);
                           }

                           // Free the packet that was allocated by av_read_frame
                           av_free_packet(&_packetFFmpeg);

                       } else if (_packetFFmpeg.stream_index==audioStream) {

                           // NSLog(@"audio stream");
                           [audioPacketQueueLock lock];

                           audioPacketQueueSize += _packetFFmpeg.size;
                           [audioPacketQueue addObject:[NSMutableData dataWithBytes:&_packetFFmpeg length:sizeof(_packetFFmpeg)]];

                           [audioPacketQueueLock unlock];

                           if (!primed) {
                               primed=YES;
                               [_audioController _startAudio];
                           }

                           if (emptyAudioBuffer) {
                               [_audioController enqueueBuffer:emptyAudioBuffer];
                           }

                           //av_free_packet(&_packetFFmpeg);

                       } else {

                           // Free the packet that was allocated by av_read_frame
                           av_free_packet(&_packetFFmpeg);
                       }


                   } else{
                       usleep(1000);
                   }
               }
           }
           completion();
       });
       return 0;
    }

    Enqueue Buffer in AudioStreamer:

    - (OSStatus)enqueueBuffer:(AudioQueueBufferRef)buffer
    {
       OSStatus status = noErr;

       if (buffer) {
           AudioTimeStamp bufferStartTime;
           buffer->mAudioDataByteSize = 0;
           buffer->mPacketDescriptionCount = 0;

           if (_streamer.audioPacketQueue.count <= 0) {
               _streamer.emptyAudioBuffer = buffer;
               return status;
           }

           _streamer.emptyAudioBuffer = nil;

           while (_streamer.audioPacketQueue.count && buffer->mPacketDescriptionCount < buffer->mPacketDescriptionCapacity) {
               AVPacket *packet = [_streamer readPacket];

               if (buffer->mAudioDataBytesCapacity - buffer->mAudioDataByteSize >= packet->size) {
                   if (buffer->mPacketDescriptionCount == 0) {
                       bufferStartTime.mSampleTime = packet->dts * _audioCodecContext->frame_size;
                       bufferStartTime.mFlags = kAudioTimeStampSampleTimeValid;
                   }

                   memcpy((uint8_t *)buffer->mAudioData + buffer->mAudioDataByteSize, packet->data, packet->size);
                   buffer->mPacketDescriptions[buffer->mPacketDescriptionCount].mStartOffset = buffer->mAudioDataByteSize;
                   buffer->mPacketDescriptions[buffer->mPacketDescriptionCount].mDataByteSize = packet->size;
                   buffer->mPacketDescriptions[buffer->mPacketDescriptionCount].mVariableFramesInPacket = _audioCodecContext->frame_size;

                   buffer->mAudioDataByteSize += packet->size;
                   buffer->mPacketDescriptionCount++;


                   _streamer.audioPacketQueueSize -= packet->size;

                   av_free_packet(packet);
               }
               else {

                   //av_free_packet(packet);
                   break;
               }
           }

           [decodeLock_ lock];
           if (buffer->mPacketDescriptionCount > 0) {
               status = AudioQueueEnqueueBuffer(audioQueue_, buffer, 0, NULL);
               if (status != noErr) {
                   NSLog(@"Could not enqueue buffer.");
               }
           } else {
               AudioQueueStop(audioQueue_, NO);
               finished_ = YES;
           }

           [decodeLock_ unlock];
       }

       return status;
    }

    Read packet in FfmpegWrapper:

    - (AVPacket*)readPacket
    {
       if (_currentPacket.size > 0 || _inBuffer) return &_currentPacket;

       NSMutableData *packetData = [audioPacketQueue objectAtIndex:0];
       _packet = [packetData mutableBytes];

       if (_packet) {
           if (_packet->dts != AV_NOPTS_VALUE) {
               _packet->dts += av_rescale_q(0, AV_TIME_BASE_Q, _audioStream->time_base);
           }

           if (_packet->pts != AV_NOPTS_VALUE) {
               _packet->pts += av_rescale_q(0, AV_TIME_BASE_Q, _audioStream->time_base);
           }

           [audioPacketQueueLock lock];
           audioPacketQueueSize -= _packet->size;
           if ([audioPacketQueue count] > 0) {
               [audioPacketQueue removeObjectAtIndex:0];
           }
           [audioPacketQueueLock unlock];

           _currentPacket = *(_packet);
       }

       return &_currentPacket;
    }
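    The timestamp handling above hinges on rescaling packet timestamps between time bases. As a reference point, the integer arithmetic that FFmpeg’s av_rescale_q() performs can be sketched like this (a simplification without the rounding and overflow protection of the real function; `rational_t` is a stand-in for AVRational):

    ```c
    #include <stdint.h>

    /* Simplified stand-in for FFmpeg's AVRational. */
    typedef struct { int num, den; } rational_t;

    /* Convert timestamp a from time base bq into time base cq:
     * a * (bq.num/bq.den) / (cq.num/cq.den), in integer arithmetic.
     * The real av_rescale_q() additionally handles rounding modes and
     * 64-bit overflow; this sketch does not. */
    static int64_t rescale_q(int64_t a, rational_t bq, rational_t cq)
    {
        return a * bq.num * cq.den / ((int64_t)bq.den * cq.num);
    }
    ```

    For example, a DTS of 90000 in a 1/90000 time base rescales to 1 in a 1/1 (seconds) time base. Note that rescaling a value of 0, as the `av_rescale_q(0, ...)` calls in readPacket above do, always yields 0, so those additions leave the timestamps unchanged.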
  • How can I encode a video to play on a DLink DSM-520 using FFMPEG?

    4 October 2015, by tolsen64

    I have been searching, testing, and coming up with nothing for over a week. I want to use FFMPEG to convert mp4’s and mkv’s to AVI files that will play on my DLink DSM-520. Mencoder will do it; the files that FFMPEG generates cause the player to lock up less than a minute into the video. First, here’s what I use to encode the file using Mencoder (scraped from the test.bat file that PocketDIVXEncoder generates):

    mencoder.exe ftwd105.mp4 -af volnorm -srate 44100 -oac mp3lame -lameopts mode=0:cbr:br=128 -noodml -vf pp=ac,scale=720:404,crop=720:400,harddup -sws 9 -ovc lavc -lavcopts vcodec=mpeg4:mbd=1:last_pred=2:vstrict=1:threads=2:vmax_b_frames=0:vbitrate=1200 -ffourcc XVID -o ftwd105_HDTV.avi

    The output file plays perfectly on the DSM-520. Looking at the file using FFPROBE, I see this:

    Input #0, avi, from 'ftwd105_HDTV.avi':
     Metadata:
       encoder         : MEncoder Redxii-SVN-r37527-4.9.3 (x86_64)
     Duration: 00:44:32.96, start: 0.000000, bitrate: 1193 kb/s
       Stream #0:0: Video: mpeg4 (Simple Profile) (XVID / 0x44495658), yuv420p, 720x400 [SAR 1:1 DAR 9:5], 1053 kb/s, 23.98 fps, 23.98 tbr, 23.98 tbn, 24k tbc
       Stream #0:1: Audio: mp3 (U[0][0][0] / 0x0055), 44100 Hz, stereo, s16p, 128 kb/s

    So now I try the same thing with FFMPEG.

    ffmpeg -i ftwd105.mp4 -vcodec mpeg4 -vtag XVID -b:v 1200k -s 720x400 -acodec libmp3lame -ab 128k -ar 44100 -ac 2 -f avi ftwd105_ffmpeg.avi

    This file does not play on the media player: it plays choppily with only clicking for sound for about 15-30 seconds, then it freezes. Looking at it with FFPROBE, it looks exactly the same as the one created by Mencoder.

    Input #0, avi, from 'ftwd105_ffmpeg.avi':
     Metadata:
       encoder         : Lavf57.0.100
     Duration: 00:44:33.14, start: 0.000000, bitrate: 1305 kb/s
       Stream #0:0: Video: mpeg4 (Simple Profile) (XVID / 0x44495658), yuv420p, 720x400 [SAR 1:1 DAR 9:5], 1165 kb/s, 23.98 fps, 23.98 tbr, 23.98 tbn, 24k tbc
       Stream #0:1: Audio: mp3 (U[0][0][0] / 0x0055), 44100 Hz, stereo, s16p, 128 kb/s

    So now I encode the video using Xvid4PSP. It plays perfectly fine, and FFPROBE shows this:

    Input #0, avi, from 'ftwd105_ps2.avi':
     Metadata:
       encoder         : VirtualDubMod 1.5.10.3 | www.virtualdub-fr.org || (build 2550/release)
     Duration: 00:44:33.09, start: 0.000000, bitrate: 861 kb/s
       Stream #0:0: Video: mpeg4 (Advanced Simple Profile) (XVID / 0x44495658), yuv420p, 720x400 [SAR 1:1 DAR 9:5], 723 kb/s, 23.98 fps, 23.98 tbr, 23.98 tbn, 23.98 tbc
       Stream #0:1: Audio: mp3 (U[0][0][0] / 0x0055), 48000 Hz, stereo, s16p, 128 kb/s

    It’s using Advanced Simple Profile, so I look this up and change my FFMPEG options:

    ffmpeg -i ftwd105.mp4 -vcodec mpeg4 -vtag XVID -b:v 1200k -s 720x400 -profile:v 15 -level 0 -acodec libmp3lame -ab 128k -ar 44100 -ac 2 -f avi ftwd105_ffmpeg.avi

    But although the output file looks the same under FFPROBE as the one made by Xvid4PSP, it still doesn’t play on the DSM-520.

    Input #0, avi, from 'ftwd105_ffmpeg_asp.avi':
     Metadata:
       encoder         : Lavf57.0.100
     Duration: 00:44:33.14, start: 0.000000, bitrate: 1305 kb/s
       Stream #0:0: Video: mpeg4 (Advanced Simple Profile) (XVID / 0x44495658), yuv420p, 720x400 [SAR 1:1 DAR 9:5], 1165 kb/s, 23.98 fps, 23.98 tbr, 23.98 tbn, 24k tbc
       Stream #0:1: Audio: mp3 (U[0][0][0] / 0x0055), 44100 Hz, stereo, s16p, 128 kb/s

    So now I’m at a loss. Is FFMPEG incapable of generating a file that the DSM-520 can play? The reason I want to use FFMPEG over Mencoder is that it’s much faster: what takes FFMPEG 15 minutes takes Mencoder 40.

    I should note that all the files created by FFMPEG play fine on the PC and on my Visio television. The DSM-520 is hooked up to a bedroom TV that isn’t a smart TV.

    Edit: I also tried libxvid in place of mpeg4, with the same results.