
Media (1)
-
Publishing an image simply
13 April 2011
Updated: February 2012
Language: French
Type : Video
Other articles (97)
-
MediaSPIP 0.1 Beta version
25 April 2011
MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
The zip file provided here only contains the sources of MediaSPIP in its standalone version.
To get a working installation, you must manually install all the software dependencies on the server.
If you want to use this archive for an installation in "farm mode", you will also need to proceed with other manual (...) -
Multilang: improving the interface for multilingual blocks
18 February 2011
Multilang is an additional plugin that is not enabled by default when MediaSPIP is initialized.
Once it is activated, MediaSPIP init automatically sets up a preconfiguration so that the new feature is operational right away. No configuration step is therefore required. -
MediaSPIP Core: Configuration
9 November 2010
By default, MediaSPIP Core provides three configuration pages (these pages rely on the CFG configuration plugin): a page for the general configuration of the template; a page for the configuration of the site's home page; a page for the configuration of the sections.
It also provides an additional page, shown only when certain plugins are enabled, for controlling their display and specific features (...)
On other sites (5818)
-
avfilter/vf_delogo: round to the closest value
9 December 2015, by Jean Delvare
When the interpolated value is divided by the sum of weights, no
rounding is done, which means the value is truncated. This results in
a slight bias towards dark green in the interpolated area. Rounding
properly removes the bias.
I measured this change to reduce the interpolation error by 1 to 2 %
on average on a number of sample input and logo area combinations.
Signed-off-by: Jean Delvare <jdelvare@suse.de>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc> -
Syncing Audio with Video iOS RTSP Player
3 October 2015, by Dave Thomas
I am combining two different classes from two different git projects to create an RTSP streamer for an iOS live-streaming application.
Edit: I agree with the -1; this question is probably a shot in the dark. But to answer "why am I not using the DFURTSPPlayer library entirely?": because I would rather use the second project's (hackcam's) OpenGL YUV display than decode the video frames into UIImages as DFURTSPPlayer does. Hackcam does not have audio.
Also, please comment if you downvote: at least help me find an answer by telling me what I need to refine to be clear, or point out why this question is inappropriate.
My current issue is that the audio playback has about one second of latency and is out of sync with the video, which is close to real time.
I know that the audio is in sync at the source because I've tested the RTSP streams in VLC; something is wrong with my implementation. It is mostly the result of frankensteining these two projects together, and of my unfamiliarity with the ffmpeg C library and AudioQueue on iOS.
Any help would be greatly appreciated!
I've taken the AudioStreamer class from this repository:
https://github.com/durfu/DFURTSPPlayer
And I am trying to get it to work with this one:
https://github.com/hackacam/ios_rtsp_player/blob/master/src/FfmpegWrapper.m
I can post more code if needed, but my main loop in FfmpegWrapper now looks like this (_audioController is a reference to AudioStreamer.m):
-(int) startDecodingWithCallbackBlock: (void (^) (AVFrameData *frame)) frameCallbackBlock
         waitForConsumer: (BOOL) wait
      completionCallback: (void (^)()) completion
{
    OSMemoryBarrier();
    _stopDecode = false;
    dispatch_queue_t decodeQueue = dispatch_queue_create("decodeQueue", NULL);
    dispatch_async(decodeQueue, ^{
        int frameFinished;
        OSMemoryBarrier();
        while (self->_stopDecode == false) {
            @autoreleasepool {
                CFTimeInterval currentTime = CACurrentMediaTime();
                if ((currentTime - _previousDecodedFrameTime) > MIN_FRAME_INTERVAL &&
                    av_read_frame(_formatCtx, &_packetFFmpeg) >= 0) {
                    _previousDecodedFrameTime = currentTime;
                    // Is this a packet from the video stream?
                    if (_packetFFmpeg.stream_index == _videoStream) {
                        // Decode video frame
                        avcodec_decode_video2(_codecCtx, _frame, &frameFinished,
                                              &_packetFFmpeg);
                        // Did we get a video frame?
                        if (frameFinished) {
                            // Create a frame object and call the block
                            AVFrameData *frameData = [self createFrameData:_frame trimPadding:YES];
                            frameCallbackBlock(frameData);
                        }
                        // Free the packet that was allocated by av_read_frame
                        av_free_packet(&_packetFFmpeg);
                    } else if (_packetFFmpeg.stream_index == audioStream) {
                        // NSLog(@"audio stream");
                        [audioPacketQueueLock lock];
                        audioPacketQueueSize += _packetFFmpeg.size;
                        [audioPacketQueue addObject:[NSMutableData dataWithBytes:&_packetFFmpeg length:sizeof(_packetFFmpeg)]];
                        [audioPacketQueueLock unlock];
                        if (!primed) {
                            primed = YES;
                            [_audioController _startAudio];
                        }
                        if (emptyAudioBuffer) {
                            [_audioController enqueueBuffer:emptyAudioBuffer];
                        }
                        //av_free_packet(&_packetFFmpeg);
                    } else {
                        // Free the packet that was allocated by av_read_frame
                        av_free_packet(&_packetFFmpeg);
                    }
                } else {
                    usleep(1000);
                }
            }
        }
        completion();
    });
    return 0;
}

Enqueue Buffer in AudioStreamer:
- (OSStatus)enqueueBuffer:(AudioQueueBufferRef)buffer
{
    OSStatus status = noErr;
    if (buffer) {
        AudioTimeStamp bufferStartTime;
        buffer->mAudioDataByteSize = 0;
        buffer->mPacketDescriptionCount = 0;
        if (_streamer.audioPacketQueue.count <= 0) {
            _streamer.emptyAudioBuffer = buffer;
            return status;
        }
        _streamer.emptyAudioBuffer = nil;
        while (_streamer.audioPacketQueue.count && buffer->mPacketDescriptionCount < buffer->mPacketDescriptionCapacity) {
            AVPacket *packet = [_streamer readPacket];
            if (buffer->mAudioDataBytesCapacity - buffer->mAudioDataByteSize >= packet->size) {
                if (buffer->mPacketDescriptionCount == 0) {
                    bufferStartTime.mSampleTime = packet->dts * _audioCodecContext->frame_size;
                    bufferStartTime.mFlags = kAudioTimeStampSampleTimeValid;
                }
                memcpy((uint8_t *)buffer->mAudioData + buffer->mAudioDataByteSize, packet->data, packet->size);
                buffer->mPacketDescriptions[buffer->mPacketDescriptionCount].mStartOffset = buffer->mAudioDataByteSize;
                buffer->mPacketDescriptions[buffer->mPacketDescriptionCount].mDataByteSize = packet->size;
                buffer->mPacketDescriptions[buffer->mPacketDescriptionCount].mVariableFramesInPacket = _audioCodecContext->frame_size;
                buffer->mAudioDataByteSize += packet->size;
                buffer->mPacketDescriptionCount++;
                _streamer.audioPacketQueueSize -= packet->size;
                av_free_packet(packet);
            } else {
                //av_free_packet(packet);
                break;
            }
        }
        [decodeLock_ lock];
        if (buffer->mPacketDescriptionCount > 0) {
            status = AudioQueueEnqueueBuffer(audioQueue_, buffer, 0, NULL);
            if (status != noErr) {
                NSLog(@"Could not enqueue buffer.");
            }
        } else {
            AudioQueueStop(audioQueue_, NO);
            finished_ = YES;
        }
        [decodeLock_ unlock];
    }
    return status;
}

Read packet in FfmpegWrapper:
- (AVPacket*)readPacket
{
    if (_currentPacket.size > 0 || _inBuffer) return &_currentPacket;
    NSMutableData *packetData = [audioPacketQueue objectAtIndex:0];
    _packet = [packetData mutableBytes];
    if (_packet) {
        if (_packet->dts != AV_NOPTS_VALUE) {
            _packet->dts += av_rescale_q(0, AV_TIME_BASE_Q, _audioStream->time_base);
        }
        if (_packet->pts != AV_NOPTS_VALUE) {
            _packet->pts += av_rescale_q(0, AV_TIME_BASE_Q, _audioStream->time_base);
        }
        [audioPacketQueueLock lock];
        audioPacketQueueSize -= _packet->size;
        if ([audioPacketQueue count] > 0) {
            [audioPacketQueue removeObjectAtIndex:0];
        }
        [audioPacketQueueLock unlock];
        _currentPacket = *(_packet);
    }
    return &_currentPacket;
}

-
ffmpeg streaming camera with directshow
17 November 2015, by atu0830
I am trying to use ffmpeg to stream one camera. The command is:
ffmpeg.exe -y -f dshow -i video="AmCam" -c:v copy -framerate 7.5 -map 0:0 -f ssegment -segment_time 4 -segment_format mpegts -segment_list "web\stream.m3u8" -segment_list_size 720 -segment_list_flags live -segment_wrap 10 -segment_list_type m3u8 "web\segments\s%%d.ts"
And I created an HTML page in the web folder:
<video controls="controls" width="720" height="405" autoplay="autoplay">
<source src="stream.m3u8" type="application/x-mpegURL"></source>
</video>
All the .ts files are generated, but when I load the page in Safari on an iPad it just shows a dark player that never stops loading.