
Other articles (96)
-
MediaSPIP Core: Configuration
9 November 2010 — MediaSPIP Core provides three different configuration pages by default (these pages rely on the CFG configuration plugin): a page for the general configuration of the skeleton; a page for configuring the site's home page; a page for configuring the sections.
It also provides an additional page, shown only when certain plugins are enabled, for controlling their specific display options and features (...)
-
MediaSPIP Player: potential problems
22 February 2011 — The player does not work on Internet Explorer
On Internet Explorer (at least versions 7 and 8), the plugin uses the Flash player Flowplayer to play video and audio. If the player does not seem to work, the cause may lie in the configuration of Apache's mod_deflate module.
If the configuration of that Apache module contains a line resembling the following, try removing or commenting it out to see whether the player then works correctly: (...)
-
Adding notes and captions to images
7 February 2011 — To add notes and captions to images, the first step is to install the "Légendes" plugin.
Once the plugin is enabled, you can configure it in the configuration area to change the rights to create, modify, and delete notes. By default, only site administrators can add notes to images.
Changes when adding a media item
When adding a media item of type "image", a new button appears above the preview (...)
On other sites (10165)
-
Alternative to ffmpeg for dynamically creating video thumbnails [closed]
4 July 2021, by Daniel Rusev
The server hosting my website doesn't have ffmpeg, and I am not allowed to install any additional extensions. Is there any other way I can make video thumbnails dynamically? Perhaps some kind of web service, where I pass the video file and get a picture file as a result. I'm using PHP, by the way.


-
Convert ffmpeg yuv420p AVFrame to CMSampleBufferRef (kCVPixelFormatType_420YpCbCr8BiPlanarFullRange)
21 July 2014, by user3272750
I have a Foscam IP camera and have access to its RTSP stream. I used DFURTSPPlayer to view the stream on my iOS device, which works fine. I use a WebRTC provider that lets me inject frames as CMSampleBufferRef, in addition to reading directly from any of the on-board cameras. I wish to use this to broadcast the IP camera stream over a secure WebRTC session.
The main loop in DFURTSPPlayer checks whether a frame is available, converts it into a UIImage, and sets it on an image view:
-(void)displayNextFrame:(NSTimer *)timer
{
    NSTimeInterval startTime = [NSDate timeIntervalSinceReferenceDate];
    if (![video stepFrame]) {
        [timer invalidate];
        [playButton setEnabled:YES];
        [video closeAudio];
        return;
    }
    imageView.image = video.currentImage;
    float frameTime = 1.0 / ([NSDate timeIntervalSinceReferenceDate] - startTime);
    if (lastFrameTime < 0) {
        lastFrameTime = frameTime;
    } else {
        lastFrameTime = LERP(frameTime, lastFrameTime, 0.8);
    }
    [label setText:[NSString stringWithFormat:@"%.0f", lastFrameTime]];
}

I'm trying to do something similar, but instead of (or in addition to) setting the UIImage I wish to also inject the frames into my WebRTC service. This is an example where they use an AVCaptureSession. I believe I could do something similar to the run loop here and inject the frame (provided I can convert the yuv420p AVFrame into a CMSampleBufferRef):
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    self.videoFrame.frameBuffer = sampleBuffer;
    // IMPORTANT: injectFrame expects a 420YpCbCr8BiPlanarFullRange and the
    // frame gets timestamped inside the service.
    NSLog(@"videoframe buffer %@", self.videoFrame.frameBuffer);
    [self.service injectFrame:self.videoFrame];
}

Hence my question. Most of the questions on Stack Overflow involve going in the other direction (typically broadcasting on-board camera input via RTSP). I'm a n00b as far as AVFoundation/Core Video is concerned. I'm prepared to put in the groundwork if someone can suggest a path. Thanks in advance!
Edit: After reading some more on this, it seems that the most important step is the conversion from 420p (video range) to 420f (full range).
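For reference, the "420f" (kCVPixelFormatType_420YpCbCr8BiPlanarFullRange) and video-range ("420v") variants differ only in the numeric range of the samples: video range uses Y in [16, 235] and Cb/Cr in [16, 240], while full range uses the whole [0, 255]. A minimal sketch of that per-sample remapping, written in Python only to show the arithmetic (in practice you would apply it per byte across the Y and interleaved CbCr planes, or let vImage or libswscale handle it):

```python
def video_to_full_range(y: int, c: int):
    """Remap one luma sample y (video range 16-235) and one chroma sample c
    (video range 16-240) to full range 0-255, clamping out-of-range input."""
    clamp = lambda v: max(0, min(255, v))
    y_full = clamp(round((y - 16) * 255 / 219))          # 16..235 -> 0..255
    c_full = clamp(round((c - 128) * 255 / 224 + 128))   # 16..240 -> 0..255, 128 stays neutral
    return y_full, c_full
```
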
-
Seek in fragmented MP4
15 November 2020, by Stefan Falk
For my web client I want the user to be able to play a track right away, without having to download the entire file. For this I am using a fragmented MP4 with the AAC audio codec (MIME type: audio/mp4; codecs="mp4a.40.2").

This is the command being used to convert an input file to an fMP4 (-frag_duration is given in microseconds, so 500K yields fragments of roughly 0.5 seconds):


ffmpeg -i /tmp/input.any \
 -f mp4 \
 -movflags faststart+separate_moof+empty_moov+default_base_moof \
 -acodec aac -b:a 256000 \
 -frag_duration 500K \
 /tmp/output.mp4



If I look at this file on MP4Box.js, I see that the file is fragmented like this:


ftyp
moov
moof
mdat
moof
mdat
..
moof
mdat
mfra



This looks alright so far, but the problem I am now facing is that it's not apparent to me how to start loading data from a specific timestamp without introducing additional overhead. What I mean by this is that I need the exact byte offset of the first [moof][mdat] for a specific timestamp, without the entire file being available.
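For context, every top-level MP4 box begins with a 4-byte big-endian size and a 4-byte type, so the [moof] offsets can only be discovered by walking the boxes from the start of the data, or by consulting an index box such as sidx or mfra. A minimal sketch of that walk, assuming plain 32-bit box sizes:

```python
import struct

def top_level_boxes(data: bytes):
    """Yield (offset, type, size) for each top-level box in an MP4 byte string.
    Assumes plain 32-bit sizes (no size == 1 / 64-bit 'largesize' headers)."""
    pos = 0
    while pos + 8 <= len(data):
        size, box_type = struct.unpack_from(">I4s", data, pos)
        yield pos, box_type.decode("ascii"), size
        pos += size

def moof_offsets(data: bytes):
    """Byte offset of every moof box, i.e. every fragment start."""
    return [off for off, typ, _ in top_level_boxes(data) if typ == "moof"]
```
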

Let's say I have a file that looks like this :


ftyp
moov
moof # 00:00
mdat 
moof # 00:01
mdat
moof # 00:02
mdat
moof # 00:03
mdat
mfra



This file, however, is not available on my server directly; it is being loaded from another service, but the client wants to request packets starting at 00:02.

Is there a way to do this efficiently, without having to load the entire file from the other service onto my server?


My guess would be to load [ftyp][moov] (or at least store that part on my own server), but as far as I know, the metadata stored in those boxes won't help me find the byte offset of the first [moof][mdat] pair.
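Worth noting for completeness: in a fragmented MP4 the time-to-byte-offset index usually lives not in moov but in the mfra box at the end of the file, whose tfra ("track fragment random access") entries map a track time directly to a moof offset; the trailing fixed-size mfro box states the mfra length, so both can be fetched with two small range requests against the end of the file. A hedged sketch of decoding such an index (version-1 tfra only, with 64-bit times and offsets; times are in the track's timescale):

```python
import struct

def parse_tfra_v1(payload: bytes):
    """Parse a version-1 'tfra' box payload (the bytes after the 8-byte
    size/type header). Returns a list of (time, moof_offset) pairs."""
    assert payload[0] == 1, "this sketch only handles version 1 (64-bit fields)"
    track_id, sizes, n_entries = struct.unpack_from(">III", payload, 4)
    # traf/trun/sample number field lengths are stored as (length - 1), 2 bits each.
    traf_len = ((sizes >> 4) & 3) + 1
    trun_len = ((sizes >> 2) & 3) + 1
    sample_len = (sizes & 3) + 1
    entries, pos = [], 16
    for _ in range(n_entries):
        time, moof_offset = struct.unpack_from(">QQ", payload, pos)
        entries.append((time, moof_offset))
        pos += 16 + traf_len + trun_len + sample_len
    return entries

def moof_for_time(entries, t):
    """Byte offset of the last fragment starting at or before time t."""
    best = entries[0][1]
    for time, offset in entries:
        if time <= t:
            best = offset
    return best
```
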

Is this even possible, or am I following the wrong approach here?