
Media (1)
-
Richard Stallman and free software
19 October 2011
Updated: May 2013
Language: French
Type: Text
Other articles (110)
-
Submitting bugs and patches
10 April 2011 - Unfortunately, no piece of software is ever perfect...
If you think you have found a bug, report it in our ticket system, taking care to include the relevant information: the exact browser type and version with which you encountered the anomaly; as precise a description of the problem as possible; if possible, the steps to reproduce it; and a link to the site/page in question.
If you think you have fixed the bug yourself (...) -
Contribute to a better visual interface
13 April 2011 - MediaSPIP is based on a system of themes and templates. Templates define the placement of information on the page and can be adapted to a wide range of uses. Themes define the overall graphic appearance of the site.
Anyone can submit a new graphic theme or template and make it available to the MediaSPIP community. -
Support for all types of media
10 April 2011 - Unlike many other software packages and modern document-sharing platforms, MediaSPIP aims to handle as many different document formats as possible, whether they are: images (png, gif, jpg, bmp and others...); audio (MP3, Ogg, Wav and others...); video (Avi, MP4, Ogv, mpg, mov, wmv and others...); or textual content, code and other formats (OpenOffice, Microsoft Office (spreadsheet, presentation), web (html, css), LaTeX, Google Earth) (...)
On other sites (10316)
-
How to write a Live555 FramedSource to allow me to stream H.264 live
28 September 2019, by Garviel - I've been trying to write a class that derives from FramedSource in Live555 so that I can stream live data from my D3D9 application to an MP4 or similar.
Each frame, I grab the backbuffer into system memory as a texture, convert it from RGB to YUV420P, encode it with x264, and then, ideally, pass the NAL packets on to Live555. I made a class called H264FramedSource that derives from FramedSource, basically by copying the DeviceSource file. Instead of the input being a file, I've made it a NAL packet which I update each frame.
I'm quite new to codecs and streaming, so I could be doing everything completely wrong. In each doGetNextFrame(), should I grab the NAL packet and do something like
memcpy(fTo, nal->p_payload, nal->i_payload)
I assume the payload is my frame data in bytes? If anybody has an example of a class derived from FramedSource that is at least close to what I'm trying to do, I would love to see it; this is all new to me and a little tricky to figure out. Live555's documentation is pretty much the code itself, which doesn't exactly make it easy to follow.
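For reference, delivery in a FramedSource subclass usually follows Live555's DeviceSource pattern: copy at most fMaxSize bytes into fTo, set fFrameSize, fNumTruncatedBytes and fPresentationTime, then call FramedSource::afterGetting(this). The sketch below only illustrates that pattern under the question's own assumptions; the nal member (an x264 nal_t* refreshed by the render loop) and the class name H264FramedSource come from the question, not from Live555.

#include <cstring>             // memcpy
#include <sys/time.h>          // gettimeofday
#include <x264.h>              // x264_nal_t
#include "FramedSource.hh"     // Live555 base class

// Sketch of the asker's H264FramedSource, modelled on Live555's DeviceSource.
class H264FramedSource : public FramedSource {
public:
    H264FramedSource(UsageEnvironment& env) : FramedSource(env), nal(NULL) {}
    void setNal(x264_nal_t* n) { nal = n; }   // called by the render loop each frame
protected:
    virtual void doGetNextFrame();
private:
    x264_nal_t* nal;                          // latest encoded NAL unit (assumption)
};

void H264FramedSource::doGetNextFrame() {
    if (nal == NULL) return;                  // nothing encoded yet; a real source would
                                              // re-schedule delivery via the event loop
    unsigned newFrameSize = nal->i_payload;   // one encoded NAL unit per delivery
    if (newFrameSize > fMaxSize) {            // never overrun the downstream buffer
        fFrameSize = fMaxSize;
        fNumTruncatedBytes = newFrameSize - fMaxSize;
    } else {
        fFrameSize = newFrameSize;
        fNumTruncatedBytes = 0;
    }
    gettimeofday(&fPresentationTime, NULL);   // wall-clock presentation time
    memcpy(fTo, nal->p_payload, fFrameSize);  // the payload is indeed the raw NAL bytes
    FramedSource::afterGetting(this);         // hand the data to the downstream object
}

A real implementation would also signal Live555's event loop (for example with TaskScheduler::triggerEvent(), as in the threaded DeviceSource examples) when a new NAL unit becomes available, rather than just returning.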
-
ffmpeg - Set segment's pkt_pts_time for each segment
8 May 2023, by Luis Lobo Borobia - We are using the following command within Node.js to create mp4 files in segments. I've omitted the actual length we use (1 minute per video) for testing purposes:


/usr/bin/ffmpeg -progress pipe:5 -use_wallclock_as_timestamps 1 \
-analyzeduration 1000000 -probesize 1000000 -fflags +igndts \
-rtsp_transport tcp \
-i rtsp://user:pwd@192.168.1.101:554/path/to/stream -strict -2 -an \
-c:v copy -strict -2 -movflags +faststart -f segment -segment_atclocktime 1 \
-reset_timestamps 1 -strftime 1 \
-initial_offset 1683326120 \
test-%Y-%m-%dT%H-%M-%S.mp4



With this command, we can force the "first" segment to have a pkt_pts_time that is "close" to the time the video was recorded.

Ideally, we would love to get the recording date/time from the camera but it doesn't seem to be a value we can count on.


The problem with the previous command is that it uses the same initial_offset for all the segments. If I use output_ts_offset instead, only the first segment gets the updated timestamp; the next ones start at 0.

Is there a way to make each segment "start where the previous one ended"?
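For context, the value being tracked here can be read off any produced segment with ffprobe. A hedged example follows; "segment.mp4" is only a placeholder for a file produced by the command above.

# Print the pts_time of the first video packet of one segment.
ffprobe -v error -select_streams v:0 -read_intervals "%+#1" \
  -show_entries packet=pts_time -of csv=p=0 segment.mp4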


-
Is it possible to avoid mentioning the exact name of the input file when doing an ffmpeg conversion?
22 September 2018, by MPS - I have been using the following command to create a video from a still image and an audio file, and it's awesome. I love it very much.
ffmpeg -framerate 5 -loop 1 -i image.jpg -i audio.mp3 -c:v libx264 -c:a copy -shortest out.mp4
However, it would be good if I didn't have to write the command to match the exact file name of either the picture or the audio.
What I do every day is copy one image file and one audio file into my folder, then rename each to something short like the names above so that the command line is easy to write.
But what if the image file has a complicated name like 8oerlujsfljsdl.jpg and I just want to start converting it right away, without renaming it or having to write the full long name on the command line?
Is there such a thing as -i *.jpg, meaning any image file in this folder (I can assure there will be only one image and one audio file in the folder) will be used as the input file for the conversion? Sorry for such a long question, which may be unnecessary, but I don't know how to explain my issue more briefly. I hope it can be understood.
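One possible workaround, sketched below, is to let the shell resolve the file names instead of ffmpeg itself. It assumes a POSIX shell and exactly one .jpg and one .mp3 in the folder, as the question describes.

# Sketch: pick up whichever single image/audio file is present, without typing its name.
img=$(ls *.jpg | head -n 1)
aud=$(ls *.mp3 | head -n 1)
ffmpeg -framerate 5 -loop 1 -i "$img" -i "$aud" -c:v libx264 -c:a copy -shortest out.mp4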