
Media (29)
-
#7 Ambience
16 October 2011
Updated: June 2015
Language: English
Type: Audio
-
#6 Teaser Music
16 October 2011
Updated: February 2013
Language: English
Type: Audio
-
#5 End Title
16 October 2011
Updated: February 2013
Language: English
Type: Audio
-
#3 The Safest Place
16 October 2011
Updated: February 2013
Language: English
Type: Audio
-
#4 Emo Creates
15 October 2011
Updated: February 2013
Language: English
Type: Audio
-
#2 Typewriter Dance
15 October 2011
Updated: February 2013
Language: English
Type: Audio
Other articles (103)
-
Publishing on MediaSPIP
13 June 2013. Can I post content from an iPad tablet?
Yes, if your installed MediaSPIP is at version 0.2 or higher. If needed, contact the administrator of your MediaSPIP to find out. -
Monitoring MediaSPIP farms (and SPIP ones while we're at it)
31 May 2013. When you manage several (or even several dozen) MediaSPIP sites on the same installation, it can be very handy to get certain information at a glance.
This article documents the Munin monitoring scripts developed with the help of Infini.
These scripts are installed automatically by the automatic installation script if a Munin installation is detected.
Description of the scripts
Three Munin scripts have been developed:
1. mediaspip_medias
A script for (...) -
Libraries and binaries specific to video and audio processing
31 January 2010. The following software and libraries are used by SPIPmotion in one way or another.
Required binaries: FFMpeg: the main encoder, able to transcode almost any type of video or audio file into formats playable on the Internet (see this tutorial for its installation); Oggz-tools: tools for inspecting ogg files; Mediainfo: retrieves information from most video and audio formats;
Complementary, optional binaries: flvtool2: (...)
On other sites (10148)
-
How to find the memory leak in iOS Xcode?
12 August 2014, by Ajin Chacko. It's my RTSP streaming iOS application with an FFmpeg decoder, and it streams fine, but memory keeps increasing while it runs. Please help me: is it a memory leak? And how can I track the leak?
This is my video streaming class: RTSPPlayer.m
#import "RTSPPlayer.h"
#import "Utilities.h"
#import "AudioStreamer.h"
@interface RTSPPlayer ()
@property (nonatomic, retain) AudioStreamer *audioController;
@end
@interface RTSPPlayer (private)
-(void)convertFrameToRGB;
-(UIImage *)imageFromAVPicture:(AVPicture)pict width:(int)width height:(int)height;
-(void)setupScaler;
@end
@implementation RTSPPlayer
@synthesize audioController = _audioController;
@synthesize audioPacketQueue,audioPacketQueueSize;
@synthesize _audioStream,_audioCodecContext;
@synthesize emptyAudioBuffer;
@synthesize outputWidth, outputHeight;
- (void)setOutputWidth:(int)newValue
{
if (outputWidth != newValue) {
outputWidth = newValue;
[self setupScaler];
}
}
- (void)setOutputHeight:(int)newValue
{
if (outputHeight != newValue) {
outputHeight = newValue;
[self setupScaler];
}
}
- (UIImage *)currentImage
{
if (!pFrame->data[0]) return nil;
[self convertFrameToRGB];
return [self imageFromAVPicture:picture width:outputWidth height:outputHeight];
}
- (double)duration
{
return (double)pFormatCtx->duration / AV_TIME_BASE;
}
- (double)currentTime
{
AVRational timeBase = pFormatCtx->streams[videoStream]->time_base;
return packet.pts * (double)timeBase.num / timeBase.den;
}
- (int)sourceWidth
{
return pCodecCtx->width;
}
- (int)sourceHeight
{
return pCodecCtx->height;
}
- (id)initWithVideo:(NSString *)moviePath usesTcp:(BOOL)usesTcp
{
if (!(self=[super init])) return nil;
AVCodec *pCodec;
// Register all formats and codecs
avcodec_register_all();
av_register_all();
avformat_network_init();
// Set the RTSP Options
AVDictionary *opts = 0;
if (usesTcp)
av_dict_set(&opts, "rtsp_transport", "tcp", 0);
if (avformat_open_input(&pFormatCtx, [moviePath UTF8String], NULL, &opts) !=0 ) {
av_log(NULL, AV_LOG_ERROR, "Couldn't open file\n");
goto initError;
}
// Retrieve stream information
if (avformat_find_stream_info(pFormatCtx,NULL) < 0) {
av_log(NULL, AV_LOG_ERROR, "Couldn't find stream information\n");
goto initError;
}
// Find the first video stream
videoStream=-1;
audioStream=-1;
for (int i=0; i < pFormatCtx->nb_streams; i++) {
if (pFormatCtx->streams[i]->codec->codec_type==AVMEDIA_TYPE_VIDEO) {
NSLog(@"found video stream");
videoStream=i;
}
if (pFormatCtx->streams[i]->codec->codec_type==AVMEDIA_TYPE_AUDIO) {
audioStream=i;
NSLog(@"found audio stream");
}
}
if (videoStream==-1 && audioStream==-1) {
goto initError;
}
// Get a pointer to the codec context for the video stream
pCodecCtx = pFormatCtx->streams[videoStream]->codec;
// Find the decoder for the video stream
pCodec = avcodec_find_decoder(pCodecCtx->codec_id);
if (pCodec == NULL) {
av_log(NULL, AV_LOG_ERROR, "Unsupported codec!\n");
goto initError;
}
// Open codec
if (avcodec_open2(pCodecCtx, pCodec, NULL) < 0) {
av_log(NULL, AV_LOG_ERROR, "Cannot open video decoder\n");
goto initError;
}
if (audioStream > -1 ) {
NSLog(@"set up audiodecoder");
[self setupAudioDecoder];
}
// Allocate video frame
pFrame = avcodec_alloc_frame();
outputWidth = pCodecCtx->width;
self.outputHeight = pCodecCtx->height;
return self;
initError:
// [self release];
return nil;
}
- (void)setupScaler
{
// Release old picture and scaler
avpicture_free(&picture);
sws_freeContext(img_convert_ctx);
// Allocate RGB picture
avpicture_alloc(&picture, PIX_FMT_RGB24, outputWidth, outputHeight);
// Setup scaler
static int sws_flags = SWS_FAST_BILINEAR;
img_convert_ctx = sws_getContext(pCodecCtx->width,
pCodecCtx->height,
pCodecCtx->pix_fmt,
outputWidth,
outputHeight,
PIX_FMT_RGB24,
sws_flags, NULL, NULL, NULL);
}
- (void)seekTime:(double)seconds
{
AVRational timeBase = pFormatCtx->streams[videoStream]->time_base;
int64_t targetFrame = (int64_t)((double)timeBase.den / timeBase.num * seconds);
avformat_seek_file(pFormatCtx, videoStream, targetFrame, targetFrame, targetFrame, AVSEEK_FLAG_FRAME);
avcodec_flush_buffers(pCodecCtx);
}
- (void)dealloc
{
// Free scaler
sws_freeContext(img_convert_ctx);
// Free RGB picture
avpicture_free(&picture);
// Free the packet that was allocated by av_read_frame
av_free_packet(&packet);
// Free the YUV frame
av_free(pFrame);
// Close the codec
if (pCodecCtx) avcodec_close(pCodecCtx);
// Close the video file
if (pFormatCtx) avformat_close_input(&pFormatCtx);
[_audioController _stopAudio];
// [_audioController release];
_audioController = nil;
// [audioPacketQueue release];
audioPacketQueue = nil;
// [audioPacketQueueLock release];
audioPacketQueueLock = nil;
// [super dealloc];
}
- (BOOL)stepFrame
{
// AVPacket packet;
int frameFinished=0;
while (!frameFinished && av_read_frame(pFormatCtx, &packet) >=0 ) {
// Is this a packet from the video stream?
if(packet.stream_index==videoStream) {
// Decode video frame
avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished, &packet);
}
if (packet.stream_index==audioStream) {
// NSLog(@"audio stream");
[audioPacketQueueLock lock];
audioPacketQueueSize += packet.size;
[audioPacketQueue addObject:[NSMutableData dataWithBytes:&packet length:sizeof(packet)]];
[audioPacketQueueLock unlock];
if (!primed) {
primed=YES;
[_audioController _startAudio];
}
if (emptyAudioBuffer) {
[_audioController enqueueBuffer:emptyAudioBuffer];
}
}
}
return frameFinished!=0;
}
- (void)convertFrameToRGB
{
sws_scale(img_convert_ctx,
pFrame->data,
pFrame->linesize,
0,
pCodecCtx->height,
picture.data,
picture.linesize);
}
- (UIImage *)imageFromAVPicture:(AVPicture)pict width:(int)width height:(int)height
{
CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
CFDataRef data = CFDataCreateWithBytesNoCopy(kCFAllocatorDefault, pict.data[0], pict.linesize[0]*height,kCFAllocatorNull);
CGDataProviderRef provider = CGDataProviderCreateWithCFData(data);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGImageRef cgImage = CGImageCreate(width,
height,
8,
24,
pict.linesize[0],
colorSpace,
bitmapInfo,
provider,
NULL,
NO,
kCGRenderingIntentDefault);
CGColorSpaceRelease(colorSpace);
UIImage *image = [UIImage imageWithCGImage:cgImage];
CGImageRelease(cgImage);
CGDataProviderRelease(provider);
CFRelease(data);
return image;
}
- (void)setupAudioDecoder
{
if (audioStream >= 0) {
_audioBufferSize = AVCODEC_MAX_AUDIO_FRAME_SIZE;
_audioBuffer = av_malloc(_audioBufferSize);
_inBuffer = NO;
_audioCodecContext = pFormatCtx->streams[audioStream]->codec;
_audioStream = pFormatCtx->streams[audioStream];
AVCodec *codec = avcodec_find_decoder(_audioCodecContext->codec_id);
if (codec == NULL) {
NSLog(@"Not found audio codec.");
return;
}
if (avcodec_open2(_audioCodecContext, codec, NULL) < 0) {
NSLog(@"Could not open audio codec.");
return;
}
if (audioPacketQueue) {
// [audioPacketQueue release];
audioPacketQueue = nil;
}
audioPacketQueue = [[NSMutableArray alloc] init];
if (audioPacketQueueLock) {
// [audioPacketQueueLock release];
audioPacketQueueLock = nil;
}
audioPacketQueueLock = [[NSLock alloc] init];
if (_audioController) {
[_audioController _stopAudio];
// [_audioController release];
_audioController = nil;
}
_audioController = [[AudioStreamer alloc] initWithStreamer:self];
} else {
pFormatCtx->streams[audioStream]->discard = AVDISCARD_ALL;
audioStream = -1;
}
}
- (void)nextPacket
{
_inBuffer = NO;
}
- (AVPacket*)readPacket
{
if (_currentPacket.size > 0 || _inBuffer) return &_currentPacket;
NSMutableData *packetData = [audioPacketQueue objectAtIndex:0];
_packet = [packetData mutableBytes];
if (_packet) {
if (_packet->dts != AV_NOPTS_VALUE) {
_packet->dts += av_rescale_q(0, AV_TIME_BASE_Q, _audioStream->time_base);
}
if (_packet->pts != AV_NOPTS_VALUE) {
_packet->pts += av_rescale_q(0, AV_TIME_BASE_Q, _audioStream->time_base);
}
[audioPacketQueueLock lock];
audioPacketQueueSize -= _packet->size;
if ([audioPacketQueue count] > 0) {
[audioPacketQueue removeObjectAtIndex:0];
}
[audioPacketQueueLock unlock];
_currentPacket = *(_packet);
}
return &_currentPacket;
}
- (void)closeAudio
{
[_audioController _stopAudio];
primed=NO;
}
@end
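One place worth checking first: every AVPacket filled by av_read_frame() owns demuxer-allocated data that has to be released after use, and in stepFrame above only the very last packet is ever released (in dealloc). Below is a minimal sketch of the read loop with per-packet cleanup; the method name is hypothetical, it reuses the same pre-3.0 FFmpeg API and the same instance variables as the class above, and it deliberately leaves out the audio-queue path, which keeps a shallow copy of the packet and therefore must not free it immediately.
- (BOOL)stepFrameReleasingPackets // hypothetical variant of stepFrame
{
    int frameFinished = 0;
    AVPacket pkt;
    while (!frameFinished && av_read_frame(pFormatCtx, &pkt) >= 0) {
        if (pkt.stream_index == videoStream) {
            // Decode the video packet as before.
            avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished, &pkt);
        }
        // Return the packet's buffer to FFmpeg on every iteration;
        // skipping this is a classic source of steadily growing memory.
        av_free_packet(&pkt); // av_packet_unref(&pkt) on FFmpeg >= 3.0
    }
    return frameFinished != 0;
}
To confirm where the growth comes from, the Allocations template in Instruments is usually more informative than Leaks here, since the memory is allocated by FFmpeg's own malloc calls rather than by Objective-C objects.
-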
Catching dropped FFmpeg frames in iOS?
30 June 2020, by jbeu425. I'm developing an iOS app which streams RTSP over a WiFi connection. I need to be able to inform the user if the connection is not good, and I'd like to know whether there is some way to catch dropped FFmpeg frames.


My frames are decoded using the method below; is there somewhere here I can tap into to catch an issue with the frame being decoded? There is currently an NSLog which says "decode video error!" about halfway down this method, but running Network Link Conditioner set to "very poor network" never reaches this line, even though the stream is very choppy.


All I'm after really is some sort of line that I can breakpoint when the stream loses quality because of a bad connection.


- (NSArray *) decodeFrames: (CGFloat) minDuration
{
NSMutableArray *result = [NSMutableArray array];

@synchronized (lock) {
 
 if([_reading integerValue] != 1){
 
 _reading = [NSNumber numberWithInt:1];
 
 @synchronized (_seekPosition) {
 if([_seekPosition integerValue] != -1 && _seekPosition){
 [self seekDecoder:[_seekPosition longLongValue]];
 _seekPosition = [NSNumber numberWithInt:-1];
 }
 }
 
 if (_videoStream == -1 &&
 _audioStream == -1)
 return nil;
 
 AVPacket packet;
 
 CGFloat decodedDuration = 0;
 
 CGFloat totalDuration = [TimeHelper calculateTimeDifference];
 CGFloat timeStampAtEntry = av_frame_get_best_effort_timestamp(_videoFrame) * _videoTimeBase;
 NSDate *timeAtEntry = [NSDate date];
 BOOL catchUp = totalDuration > 0;
 do {
 BOOL finished = NO;
 
 while (!finished) {
 
 NSDate* snap1 = [NSDate date];
 if (av_read_frame(_formatCtx, &packet) < 0) {
 _isEOF = YES;
 catchUp = NO;
 [self endOfFileReached];
 break;
 }
 
 [self frameRead];
 
 if ([[NSDate date] timeIntervalSinceDate:snap1] > 3) {
 _isEOF = YES;
 catchUp = NO;
 [self endOfFileReached];
 break;
 }
 if (packet.stream_index ==_videoStream) {
 
 int pktSize = packet.size;
 

 while (pktSize > 0) {
 
 int gotframe = 0;
 int len = avcodec_decode_video2(_videoCodecCtx,
 _videoFrame,
 &gotframe,
 &packet);
 
 if (len < 0) {
 LoggerVideo(0, @"decode video error, skip packet");
 NSLog(@"decode video error!");
 break;
 }
 
 if (gotframe) {
 
 if (catchUp ) {
 CGFloat timeStamp = av_frame_get_best_effort_timestamp(_videoFrame) * _videoTimeBase;
 catchUp = totalDuration > (timeStamp - timeStampAtEntry);
 if (!catchUp) {
 CGFloat nextTotalDuration = [[NSDate date] timeIntervalSinceDate:timeAtEntry];
 if (nextTotalDuration < totalDuration && nextTotalDuration > 0.5) {
 totalDuration = nextTotalDuration;
 timeAtEntry = [NSDate date];
 timeStampAtEntry = timeStamp; 
 catchUp = true;
 }
 }
 if ([configuration.recordStream intValue] == 1) {
 [self writeToFile: packet];
 }
 
 } else {
 
 if (!_disableDeinterlacing &&
 _videoFrame->interlaced_frame) {
 
 avpicture_deinterlace((AVPicture*)_videoFrame,
 (AVPicture*)_videoFrame,
 _videoCodecCtx->pix_fmt,
 _videoCodecCtx->width,
 _videoCodecCtx->height);
 }
 
 KxVideoFrame *frame = [self handleVideoFrame];
 if (frame) {
 
 [result addObject:frame];
 _position = frame.position;
 decodedDuration += frame.duration;
 if (decodedDuration > minDuration)
 finished = YES;
 }
 

 if ([configuration.recordStream intValue] == 1) {
 [self writeToFile: packet];
 }
 
 
 
 }
 }
 
 if (0 == len)
 break;
 
 pktSize -= len;
 }
 }
 
 
 
 
 av_free_packet(&packet);
 }
 } while (catchUp);
 _reading = [NSNumber numberWithInt:0];
 [TimeHelper resetTimeEnteredForeground];
 [TimeHelper resetTimeEnteredBackground];
 
 return result;
 }
 
 }

 return result;

 }
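A note on where the error actually surfaces: avcodec_decode_video2 only reports data that reached the decoder in a damaged state, and with RTSP the transport layer often discards incomplete frames before they ever get that far, which is one reason the "decode video error!" line may never fire even on a very poor link. One pragmatic alternative, sketched below under the assumptions that the stream's timestamps are monotonic and that a nominal frame rate is known (the 25 fps figure is purely an assumption, not taken from this code), is to watch the gap between consecutive best-effort timestamps inside the gotframe branch of decodeFrames: above; a gap much larger than one frame interval means frames went missing upstream and gives a single line to breakpoint on.
// Hypothetical helper; uses the same _videoFrame / _videoTimeBase ivars
// as the decodeFrames: method above.
static double lastTimeStamp = -1.0;              // previous decoded frame (seconds)
const double nominalFrameDuration = 1.0 / 25.0;  // assumed 25 fps source

double ts = av_frame_get_best_effort_timestamp(_videoFrame) * _videoTimeBase;
if (lastTimeStamp >= 0.0 && ts - lastTimeStamp > 2.0 * nominalFrameDuration) {
    // Frames are missing between lastTimeStamp and ts: the link (or the
    // camera) skipped them before they reached the decoder.
    NSLog(@"~%d frame(s) dropped before pts %.3f",
          (int)((ts - lastTimeStamp) / nominalFrameDuration) - 1, ts);
    // <-- breakpoint here to react to a degrading connection
}
lastTimeStamp = ts;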



-
SNES Hardware Compression
16 June 2011, by Multimedia Mike — Game Hacking. I was browsing the source code for some Super Nintendo Entertainment System (SNES) emulators recently. I learned some interesting things about compression hardware. I had previously uncovered one compression algorithm used in an SNES title, but that was implemented in software.
SNES game cartridges — being all hardware — were at liberty to expand the hardware capabilities of the base system by adding new processors. The most well-known of these processors was the Super FX which allows for basic polygon graphical rendering, powering such games as Star Fox. It was by no means the only such add-on processor, though. Here is a Wikipedia page of all the enhancement chips used in assorted SNES games. A number of them mention compression and so I delved into the emulators to find the details:
- The Super FX is listed in Wikipedia vaguely as being able to decompress graphics. I see no reference to decompression in emulator source code.
- DSP-3 emulation source code makes reference to LZ-type compression as well as tree/symbol decoding. I’m not sure if the latter is a component of the former. Wikipedia lists the chip as supporting "Shannon-Fano bitstream decompression."
- Similar to Super FX, the SA-1 chip is listed in Wikipedia as having some compression capabilities. Again, either that’s not true or none of the games that use the chip (notably Super Mario RPG) make use of the feature.
- The S-DD1 chip uses arithmetic and Golomb encoding for compressing graphics. Wikipedia refers to this as the ABS Lossless Entropy Algorithm. Googling for further details on that algorithm name yields no results, but I suspect it’s unrelated to anti-lock brakes. The algorithm is alleged to allow Star Ocean to smash 13 MB of graphics into a 4 MB cartridge ROM (largest size of an SNES cartridge).
- The SPC7110 can decompress data using a combination of arithmetic coding and Z-curve/Morton curve reordering (a short sketch of the Morton-ordering idea follows this list).
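For anyone unfamiliar with the term, Morton/Z-order reordering simply interleaves the bits of a pixel's (x, y) coordinates so that 2D-local texels land near each other in the 1D stream the entropy coder sees. The C sketch below is a generic illustration of that index computation, not the SPC7110's actual bitstream format:
#include <stdint.h>

/* Spread the low 16 bits of v so a zero bit sits between each original bit. */
static uint32_t part1by1(uint32_t v)
{
    v &= 0x0000ffff;
    v = (v | (v << 8)) & 0x00ff00ff;
    v = (v | (v << 4)) & 0x0f0f0f0f;
    v = (v | (v << 2)) & 0x33333333;
    v = (v | (v << 1)) & 0x55555555;
    return v;
}

/* Morton (Z-curve) index of pixel (x, y): bits of x and y interleaved. */
static uint32_t morton_index(uint32_t x, uint32_t y)
{
    return part1by1(x) | (part1by1(y) << 1);
}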
No, I don’t plan to implement codecs for these schemes. But it’s always comforting to know that I could.
Not directly a compression scheme, but still a curious item is the MSU1 concept put forth by the bsnes emulator. This is a hypothetical coprocessor implemented by bsnes that gives an emulated cartridge access to a 4 GB address space. What to do with all this space? Allow for the playback of uncompressed PCM audio as well as uncompressed video at 240x144x256 colors @ 30 fps. According to the docs and the source code, the latter feature doesn't appear to be implemented, though; only the raw PCM playback.