
Media (2)
-
SPIP - plugins - embed code - Example
2 September 2013
Updated: September 2013
Language: French
Type: Image
-
Publishing an image simply
13 April 2011
Updated: February 2012
Language: French
Type: Video
Other articles (71)
-
Customising by adding your logo, banner or background image
5 September 2013. Some themes support three customisation elements: adding a logo; adding a banner; adding a background image.
-
Writing a news item
21 June 2013. Present changes to your MédiaSPIP, or news about your projects, on your MédiaSPIP via the news section.
In MédiaSPIP's default theme, spipeo, news items are displayed at the bottom of the main page, below the editorials.
You can customise the news-item creation form.
News-item creation form: for a document of type "news item", the fields offered by default are: Publication date (customise the publication date) (...) -
Publishing on MédiaSPIP
13 June 2013. Can I post content from an iPad tablet?
Yes, if your installed MédiaSPIP is at version 0.2 or higher. If needed, contact your MédiaSPIP administrator to find out.
On other sites (13281)
-
How to display a stream of mdat/moof boxes in VLC? [closed]
8 July 2024, by roacs
I am trying to display a real-time video stream in VLC. The snag is that the real-time video being received is a stream of just the mdat and moof boxes of a fragmented MP4 file that is being recorded elsewhere. The initialization information (ftyp/moov) is not, and never will be, available in the real-time stream. There is also no audio.


I have access to initialization information (ftyp/moov) of a previously completed file and can use that to aid in the processing/streaming of the real-time mdat/moof boxes.


I am currently extracting the contents of the mdat box, splitting it up, packaging it into 188-byte MPEG-TS packets, and multicasting those for VLC to pick up. And, as a shot in the dark, every 50 mdat boxes I also package the SPS and PPS NALUs from the initialization information of the completed file and multicast those in one MPEG-TS packet.
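The splitting step described above can be sketched roughly as follows. This is only a minimal illustration of 188-byte TS framing (the PID value, the stuffing, and the continuity-counter handling are my assumptions); a real TS stream additionally needs PAT/PMT tables and PES framing, which this sketch deliberately omits:

```python
TS_PACKET_SIZE = 188
TS_HEADER_SIZE = 4
TS_PAYLOAD_SIZE = TS_PACKET_SIZE - TS_HEADER_SIZE  # 184 bytes per packet

def ts_packets(payload: bytes, pid: int = 0x100) -> list:
    """Split a raw payload (e.g. mdat contents) into 188-byte TS packets."""
    packets = []
    cc = 0  # 4-bit continuity counter
    for i in range(0, len(payload), TS_PAYLOAD_SIZE):
        chunk = payload[i:i + TS_PAYLOAD_SIZE]
        pusi = 0x40 if i == 0 else 0x00  # payload_unit_start_indicator
        header = bytes([
            0x47,                        # TS sync byte
            pusi | ((pid >> 8) & 0x1F),  # PUSI flag + high 5 bits of PID
            pid & 0xFF,                  # low 8 bits of PID
            0x10 | cc,                   # payload-only + continuity counter
        ])
        # Pad the final short chunk; a real mux would use an adaptation
        # field for stuffing instead of raw 0xFF bytes.
        packets.append(header + chunk.ljust(TS_PAYLOAD_SIZE, b"\xff"))
        cc = (cc + 1) & 0x0F
    return packets
```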


Input looks like this:


- ...
- mdat 1
- moof 1
- mdat 2
- moof 2
- ...
- mdat N
- moof N
- ...

And my output looks like this:


- ...
- MPEG-TS 1 containing first 184 bytes of mdat 1
- MPEG-TS 2 containing next 184 bytes of mdat 1
- ...
- MPEG-TS N containing last 184 bytes of mdat 1
- MPEG-TS N+1 containing first 184 bytes of mdat 2
- MPEG-TS N+2 containing next 184 bytes of mdat 2
- ...
- MPEG-TS N+M containing last 184 bytes of mdat 2
- ...
- MPEG-TS containing SPS and PPS NALUs
- ...

VLC receives the data, but there is no video playback.


How do I process this input in order to get it to play in VLC?


-
I can't start dream recording? [closed]
5 March 2024, by usny1986
I'm looking for an app to record my dreams so I can watch them later. (I want it to record speech sounds too.) The apps I've tried so far haven't worked properly.


When I start to see my dream, the applications usually give an error that ffmpeg is not installed. Granted, this error doesn't occur if I dream in the dark at night; but the problem is that I sleep during the day, so I dream during the day. Even though I installed the latest version of FFmpeg, I was not successful.


Is there a successful app to record daytime dreams?


I tried the "HD Dream Captura" application, but it didn't work: it doesn't work during the day. It also gives a memory-full error and is constantly interrupted, even though I have enough free space in my brain to be able to see my dreams.


-
Is it possible to use the signalstats YAVG filter in FFMPEG to control the gamma correction in a video? [migrated]
27 December 2023, by tuqueque
I don't know if this is even possible, but something tells me it is… You see, I want to dynamically control the gamma correction in FFMPEG/FFPLAY depending on how bright or dark the frame currently being played in a video/movie is…


For context, there's a filter in FFMPEG/FFPLAY/FFPROBE called signalstats that reports various useful sets of info about the video being processed, played or analyzed. There's a YAVG parameter in the signalstats filter that returns the average luma level of the frame (https://ffmpeg.org/ffmpeg-filters.html#signalstats-1). This is what I want to use to determine/calculate the gamma value to use in the eq filter's gamma option (https://ffmpeg.org/ffmpeg-filters.html#eq)… For example, in the end, I would like to use a formula like "1-(YAVG/50)+1"…

With the signalstats YAVG filter/parameter I've managed to do these unrelated exercises:

Analyze (with FFPROBE) a video and write a log file with the YAVG value for each frame:

ffprobe -f lavfi movie=VIDEO_INPUT,signalstats -show_entries frame_tags=lavfi.signalstats.YAVG -of csv="p=0" > YAVG.log
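The log produced by this command can then be parsed offline; a minimal sketch (assuming the log contains one YAVG value per line, as the csv="p=0" output format suggests), applying the "1-(YAVG/50)+1" formula from above:

```python
def gamma_for_yavg(yavg: float) -> float:
    """Map an average luma value to a gamma via "1-(YAVG/50)+1"."""
    return 1 - (yavg / 50) + 1  # simplifies to 2 - yavg/50

def per_frame_gammas(log_path: str) -> list:
    """Read YAVG.log (one value per line) and return per-frame gammas."""
    with open(log_path) as f:
        return [gamma_for_yavg(float(line)) for line in f if line.strip()]
```

With this formula, a mid-grey frame (YAVG 50) maps to a neutral gamma of 1.0, while darker frames get gammas above 1.0 and brighter frames below.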



I have also been able to play a video (with FFPLAY) and show, in the upper-left corner, the YAVG value for each frame being played:

First, we need to create a text file in the home folder, called in this case "signalstat_drawtext.txt", with the following content:

%{metadata:lavfi.signalstats.YAVG}



Then, we run this command:


ffplay VIDEO_INPUT -vf signalstats,drawtext=fontfile=FreeSerif.ttf:fontcolor=lime:textfile=signalstat_drawtext.txt



However, I haven't been able to find or guess how to use the YAVG output to control (in real time) the gamma value in the eq filter… I think it's possible, I just don't know how to write the command.

Using FFPLAY, I would imagine something like this:


ffplay VIDEO_INPUT -vf signalstats,eq=gamma="1-(metadata:lavfi.signalstats.YAVG/50)+1":eval=frame



But of course, this doesn't work and I'm sure I'm just writing nonsense.
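One two-pass workaround I can imagine (an assumption on my part, not something verified against the ffmpeg docs linked above) is to turn the YAVG.log produced by the ffprobe command earlier into a sendcmd script that re-sets eq's gamma at every frame timestamp. A rough, untested sketch of the generator, assuming a constant 25 fps and that the ffmpeg build accepts gamma as a runtime command for eq:

```python
def write_sendcmd(yavg_values, out_path, fps=25.0):
    """Write a sendcmd file issuing one "eq gamma <value>" per frame.

    Assumes a constant frame rate; each line has the sendcmd shape
    "<time> <target-filter> <command> <arg>;".
    """
    with open(out_path, "w") as out:
        for n, yavg in enumerate(yavg_values):
            gamma = 1 - (yavg / 50) + 1  # the "1-(YAVG/50)+1" formula
            out.write(f"{n / fps:.3f} eq gamma {gamma:.3f};\n")
```

The resulting file might then be used as something like (again, an assumption): ffplay VIDEO_INPUT -vf sendcmd=f=gamma.cmd,eq=gamma=1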