
Media (1)

Keyword: - Tags -/Christian Nold

Other articles (56)

  • Use, discuss, criticize

    13 April 2011

    Talk to people directly involved in MediaSPIP’s development, or to people around you who could use MediaSPIP to share, enhance or develop their creative projects.
    The bigger the community, the more MediaSPIP’s potential will be explored and the faster the software will evolve.
    A discussion list is available for all exchanges between users.

  • Customizing by adding a logo, a banner, or a background image

    5 September 2013

    Some themes support three customization elements: adding a logo; adding a banner; adding a background image.

  • Authorizations overridden by plugins

    27 April 2010

    MediaSPIP core
    autoriser_auteur_modifier(), so that visitors are able to edit their own information on the authors page

On other sites (8526)

  • There is a video stream option and player, but I can't find where to manage the video and stream

    26 January 2018, by Chanaka De Silva

    I’m working on this repository. In this application, we can upload a video taken by an action camera or drone; it uses the GPS coordinates (lat, lng) to draw the video’s path on a map. We can also play the video in the system after uploading.

    But the video file is very large, so it takes a long time to process, and to download and play once hosted on a server. I wanted to reduce the size, so I wrote this code:

    const ffmpeg = require('ffmpeg');

    try {
        new ffmpeg('demo.mp4', function (err, video) {
            if (!err) {
                video
                    .setVideoSize('50%', true, false)  // scale to 50%, keeping pixel aspect ratio
                    .setVideoFrameRate(30)
                    .save('output.mp4', function (error, file) {
                        if (error) {
                            console.log('error: ', error);
                        } else {
                            console.log('Video file: ' + file);
                        }
                    });
                console.log('The video is ready to be processed');
            } else {
                console.log('Error: ' + err);
            }
        });
    } catch (e) {
        // new ffmpeg() can throw synchronously (e.g. file not found)
        console.log(e.code, e.msg);
    }
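
    For what it's worth, the ffmpeg npm module used here drives the ffmpeg command-line binary, so the conversion can be tested independently of the app: assuming the same settings as above (scale to 50%, 30 fps), a rough command-line equivalent would be ffmpeg -i demo.mp4 -vf scale=iw/2:-2 -r 30 output.mp4.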

    But the issue is that I can’t find where the video comes from in this application. I need to pass a path like "demo.mp4" to my code, as you can see. This is the index file of this application: https://github.com/chanakaDe/replay-lite/blob/master/frontend/index.html

    And this is the player file: https://github.com/chanakaDe/replay-lite/blob/master/frontend/player.js

    This is the file called api.js: https://github.com/chanakaDe/replay-lite/blob/master/frontend/api.js

    This file also has some video functions: https://github.com/chanakaDe/replay-lite/blob/master/gopro/index.js

    Some of you may see this as a silly question, but you have the repo to check: could you please take a look and let me know where to get a video file or stream so I can add my size-reduction code?

  • Why is the live video stream not smooth, while the audio stream is normal, when played by a Flash RTMP player after encoding?

    1 December 2015, by xiaolan

    My video stream is encoded with H.264 and my audio stream with AAC. In fact, I get these streams by reading an FLV file. I decode only the video stream in order to get all the video frames, then do something to them with ffmpeg before re-encoding, such as changing some pixels. Finally I push the video and audio streams to crtmpserver. When I pull the live stream from that server, I find the video is not smooth but the audio is normal. However, when I change gop_size from 12 to 3, everything is OK. What causes this problem? Can anyone explain it to me?
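
    For reference, gop_size here is the encoder's keyframe interval: with a GOP of 12, a player that joins the live stream (or loses a packet) may wait up to 12 frames for the next decodable keyframe, while a GOP of 3 shortens that window at the cost of extra bitrate. A minimal sketch of where this is set with the ffmpeg C API (the helper name is illustrative; the caller still configures the rest of the context):

    #include <libavcodec/avcodec.h>

    /* Sketch: a smaller gop_size makes the encoder insert keyframes
     * more often, so a late-joining player reaches a decodable frame
     * sooner and drops fewer dependent frames. */
    static AVCodecContext *make_h264_encoder(int gop)
    {
        const AVCodec *codec = avcodec_find_encoder(AV_CODEC_ID_H264);
        AVCodecContext *enc = avcodec_alloc_context3(codec);
        enc->gop_size = gop;     /* e.g. 3 instead of 12 */
        enc->max_b_frames = 0;   /* B-frames reorder output and add delay */
        return enc;              /* caller sets width/height, pix_fmt,
                                    time_base, then calls avcodec_open2() */
    }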

  • Syncing Audio with Video in an iOS RTSP Player

    3 October 2015, by Dave Thomas

    I am combining two different classes from two different git projects to create an RTSP streamer for an iOS live streaming application.

    Edit: I agree with the -1; this question is probably a shot in the dark. But to answer the question "why am I not using the DFURTSPPlayer library entirely?": because I would rather use the OpenGL YUV display from the second project, hackacam, than decode the video frames into UIImages the way DFURTSPPlayer does. Hackacam does not have audio.

    Also, please comment if you downvote; at least help me find an answer by telling me what I need to refine to be clear, or point out if this question is inappropriate.

    My current issue is that the audio playback has about one second of latency and is out of sync with the video, which is close to real time.

    I know that the streams themselves are in sync because I’ve tested the RTSP streams in VLC, so something is wrong with my implementation: mostly Frankensteining these two projects together, plus the fact that I am not familiar with the ffmpeg C library or AudioQueue on iOS.

    Any help would be greatly appreciated!

    I’ve taken the AudioStreamer class from this repository:
    https://github.com/durfu/DFURTSPPlayer

    https://github.com/durfu/DFURTSPPlayer/blob/master/DFURTSPPlayer/DFURTSPPlayer/FFMpegDecoder/AudioStreamer.m

    And I am trying to get it to work with this one:
    https://github.com/hackacam/ios_rtsp_player/blob/master/src/FfmpegWrapper.m

    I can post more code if needed, but my main loop in FfmpegWrapper now looks like this (_audioController is a reference to AudioStreamer.m):

    -(int) startDecodingWithCallbackBlock: (void (^) (AVFrameData *frame)) frameCallbackBlock
                         waitForConsumer: (BOOL) wait
                      completionCallback: (void (^)()) completion
    {
       OSMemoryBarrier();
       _stopDecode=false;
       dispatch_queue_t decodeQueue = dispatch_queue_create("decodeQueue", NULL);
       dispatch_async(decodeQueue, ^{
           int frameFinished;
           OSMemoryBarrier();
           while (self->_stopDecode==false){
               @autoreleasepool {
                   CFTimeInterval currentTime = CACurrentMediaTime();
                   if ((currentTime-_previousDecodedFrameTime) > MIN_FRAME_INTERVAL &&
                       av_read_frame(_formatCtx, &_packetFFmpeg)>=0) {

                       _previousDecodedFrameTime = currentTime;
                       // Is this a packet from the video stream?
                       if(_packetFFmpeg.stream_index==_videoStream) {
                           // Decode video frame
                           avcodec_decode_video2(_codecCtx, _frame, &frameFinished,
                                                 &_packetFFmpeg);

                           // Did we get a video frame?
                           if(frameFinished) {
                               // create a frame object and call the block;
                               AVFrameData *frameData = [self createFrameData:_frame trimPadding:YES];
                               frameCallbackBlock(frameData);
                           }

                           // Free the packet that was allocated by av_read_frame
                           av_free_packet(&_packetFFmpeg);

                       } else if (_packetFFmpeg.stream_index==audioStream) {

                           // NSLog(@"audio stream");
                           [audioPacketQueueLock lock];

                           audioPacketQueueSize += _packetFFmpeg.size;
                           [audioPacketQueue addObject:[NSMutableData dataWithBytes:&_packetFFmpeg length:sizeof(_packetFFmpeg)]];

                           [audioPacketQueueLock unlock];

                           if (!primed) {
                               primed=YES;
                               [_audioController _startAudio];
                           }

                           if (emptyAudioBuffer) {
                               [_audioController enqueueBuffer:emptyAudioBuffer];
                           }

                           // Not freed here: the queued copy shares this packet's
                           // data buffer, which enqueueBuffer frees after copying.
                           //av_free_packet(&_packetFFmpeg);

                       } else {

                           // Free the packet that was allocated by av_read_frame
                           av_free_packet(&_packetFFmpeg);
                       }


                   } else{
                       usleep(1000);
                   }
               }
           }
           completion();
       });
       return 0;
    }
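
    One likely source of the offset is visible in this loop: decoded video frames are delivered to the renderer immediately through frameCallbackBlock, while audio packets merely accumulate in audioPacketQueue until AudioQueue drains them, so any backlog in that queue plays out as audio latency. A rough way to quantify it is to convert the queued byte count into seconds; a minimal sketch, assuming constant-bitrate audio and taking bit_rate from the audio codec context (the function name is illustrative):

    #include <stdint.h>

    /* Sketch: estimate how much audio (in seconds) is queued but not
     * yet played, from the byte counter the loop above maintains. */
    static double queued_audio_seconds(int64_t queued_bytes, int64_t bit_rate)
    {
        if (bit_rate <= 0)
            return 0.0;   /* unknown rate: no estimate possible */
        return (double)queued_bytes * 8.0 / (double)bit_rate;
    }

    Logging this next to the video timestamps would show whether the queue alone accounts for the roughly one second of lag.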

    Enqueue buffer in AudioStreamer:

    - (OSStatus)enqueueBuffer:(AudioQueueBufferRef)buffer
    {
       OSStatus status = noErr;

       if (buffer) {
           AudioTimeStamp bufferStartTime;
           buffer->mAudioDataByteSize = 0;
           buffer->mPacketDescriptionCount = 0;

           if (_streamer.audioPacketQueue.count <= 0) {
               _streamer.emptyAudioBuffer = buffer;
               return status;
           }

           _streamer.emptyAudioBuffer = nil;

           while (_streamer.audioPacketQueue.count && buffer->mPacketDescriptionCount < buffer->mPacketDescriptionCapacity) {
               AVPacket *packet = [_streamer readPacket];

               if (buffer->mAudioDataBytesCapacity - buffer->mAudioDataByteSize >= packet->size) {
                   if (buffer->mPacketDescriptionCount == 0) {
                       bufferStartTime.mSampleTime = packet->dts * _audioCodecContext->frame_size;
                       bufferStartTime.mFlags = kAudioTimeStampSampleTimeValid;
                   }

                   memcpy((uint8_t *)buffer->mAudioData + buffer->mAudioDataByteSize, packet->data, packet->size);
                   buffer->mPacketDescriptions[buffer->mPacketDescriptionCount].mStartOffset = buffer->mAudioDataByteSize;
                   buffer->mPacketDescriptions[buffer->mPacketDescriptionCount].mDataByteSize = packet->size;
                   buffer->mPacketDescriptions[buffer->mPacketDescriptionCount].mVariableFramesInPacket = _audioCodecContext->frame_size;

                   buffer->mAudioDataByteSize += packet->size;
                   buffer->mPacketDescriptionCount++;


                   _streamer.audioPacketQueueSize -= packet->size;

                   av_free_packet(packet);
               }
               else {

                   //av_free_packet(packet);
                   break;
               }
           }

           [decodeLock_ lock];
           if (buffer->mPacketDescriptionCount > 0) {
               status = AudioQueueEnqueueBuffer(audioQueue_, buffer, 0, NULL);
               if (status != noErr) {
                   NSLog(@"Could not enqueue buffer.");
               }
           } else {
               AudioQueueStop(audioQueue_, NO);
               finished_ = YES;
           }

           [decodeLock_ unlock];
       }

       return status;
    }
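
    A detail worth checking here: bufferStartTime.mSampleTime is set to packet->dts * _audioCodecContext->frame_size, which is only a valid sample count if dts is measured in audio frames. If dts is in the stream's time_base units, it would first need rescaling; a minimal sketch using av_rescale_q (the helper name is illustrative):

    #include <libavutil/mathematics.h>
    #include <libavutil/rational.h>

    /* Sketch: convert a dts expressed in the audio stream's time_base
     * into a sample count suitable for AudioTimeStamp.mSampleTime. */
    static int64_t dts_to_samples(int64_t dts, AVRational time_base, int sample_rate)
    {
        return av_rescale_q(dts, time_base, (AVRational){1, sample_rate});
    }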

    Read packet in FfmpegWrapper:

    - (AVPacket*)readPacket
    {
       if (_currentPacket.size > 0 || _inBuffer) return &_currentPacket;

       NSMutableData *packetData = [audioPacketQueue objectAtIndex:0];
       _packet = [packetData mutableBytes];

       if (_packet) {
           // Note: rescaling a delta of 0 yields 0, so these two
           // adjustments currently leave dts and pts unchanged.
           if (_packet->dts != AV_NOPTS_VALUE) {
               _packet->dts += av_rescale_q(0, AV_TIME_BASE_Q, _audioStream->time_base);
           }

           if (_packet->pts != AV_NOPTS_VALUE) {
               _packet->pts += av_rescale_q(0, AV_TIME_BASE_Q, _audioStream->time_base);
           }

           [audioPacketQueueLock lock];
           audioPacketQueueSize -= _packet->size;
           if ([audioPacketQueue count] > 0) {
               [audioPacketQueue removeObjectAtIndex:0];
           }
           [audioPacketQueueLock unlock];

           _currentPacket = *(_packet);
       }

       return &_currentPacket;
    }
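
    One caveat about the assignment _currentPacket = *(_packet) at the end: copying an AVPacket struct by value duplicates the header but shares the underlying data buffer, so only one of the copies may be freed. If an independently owned copy is ever needed, FFmpeg's own API for that is av_packet_ref; a minimal sketch (the wrapper name is illustrative):

    #include <libavcodec/avcodec.h>

    /* Sketch: take an independently owned reference to a packet's data
     * instead of sharing the source buffer via struct assignment. */
    static int copy_packet(AVPacket *dst, const AVPacket *src)
    {
        return av_packet_ref(dst, src);   /* release with av_packet_unref(dst) */
    }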