
Media (1)
-
1 000 000 (wav version)
26 September 2011
Updated: April 2013
Language: English
Type: Audio
Other articles (25)
-
Encoding and processing into web-friendly formats
13 April 2011
MediaSPIP automatically converts uploaded files to internet-compatible formats.
Video files are encoded in MP4, Ogv and WebM (supported by HTML5) and MP4 (supported by Flash).
Audio files are encoded in MP3 and Ogg (supported by HTML5) and MP3 (supported by Flash).
Where possible, text is analyzed in order to retrieve the data needed for search engine detection, and then exported as a series of image files.
All uploaded files are stored online in their original format, so you can (...) -
Support for all media types
10 April 2011
Unlike many other software packages and modern document-sharing platforms, MediaSPIP aims to handle as many different document formats as possible, whether they are: images (png, gif, jpg, bmp and others...); audio (MP3, Ogg, Wav and others...); video (Avi, MP4, Ogv, mpg, mov, wmv and others...); textual content, code or other files (OpenOffice, Microsoft Office (spreadsheet, presentation), web (html, css), LaTeX, Google Earth) (...)
-
List of compatible distributions
26 April 2011
The table below is the list of Linux distributions compatible with the automated installation script of MediaSPIP.

Distribution name   Version name           Version number
Debian              Squeeze                6.x.x
Debian              Wheezy                 7.x.x
Debian              Jessie                 8.x.x
Ubuntu              The Precise Pangolin   12.04 LTS
Ubuntu              The Trusty Tahr        14.04
If you want to help us improve this list, you can provide us access to a machine whose distribution is not mentioned above or send the necessary fixes to add (...)
On other sites (7165)
-
How to compose (encoded) pixels into videos / live-streams in Flutter?
18 October 2022, by Ryuujo
I am trying to make an OBS-like app using Flutter.


I was trying to use the Flutter engine to draw widgets onto the video
frames, along with the screen, as separate layers.


I came up with a bad way, which:

- use RenderRepaintBoundary to get images of a specific area.
- use ffmpeg to compose these .png series into a video with h.264 encoding (a sketch of this step follows the code below).
- (then maybe use the .mp4 files to publish as a video stream??)

This is baaad in real-time performance and efficiency, apparently.


(relevant code)


// some_page.dart

// Imports needed by this snippet:
import 'dart:io';
import 'dart:ui';

import 'package:flutter/foundation.dart';
import 'package:flutter/rendering.dart';
import 'package:path_provider/path_provider.dart';

// repaintKey: a GlobalKey attached to a RepaintBoundary in the widget tree (defined elsewhere).
int index = 0;

Future<void> onTestPressed() async {
  int i = 0;
  while (i++ < 600) {
    try {
      // Capture the subtree wrapped by the RepaintBoundary as a PNG-encoded image.
      var render = repaintKey.currentContext!.findRenderObject()
          as RenderRepaintBoundary;
      double dpr = window.devicePixelRatio;
      var byteData = await render
          .toImage(pixelRatio: dpr)
          .then((image) => image.toByteData(format: ImageByteFormat.png));

      // Write each captured frame to a numbered file in the temp directory.
      var tempDir = await getTemporaryDirectory();
      var fileName = '${tempDir.path}/frame_${index++}';

      var bytes = byteData!.buffer.asUint8List();
      var file = File(fileName);
      if (!file.existsSync()) {
        file.createSync();
      }

      await file.writeAsBytes(bytes);
      // OpenFile.open(fileName);
    } catch (e) {
      if (kDebugMode) {
        print(e);
      }
    }
  }
}
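For the second step in the list above, here is a minimal sketch of the ffmpeg call that would turn the saved frames into an h.264 video (shown as a Python invocation; it assumes the frames were written with a .png extension and an unpadded index like frame_0.png, frame_1.png, ..., and frames_dir is a placeholder path):

import subprocess

# Sketch only: compose a PNG sequence into an H.264 MP4.
frames_dir = "/tmp/frames"                    # placeholder path
subprocess.run([
    "ffmpeg", "-y",
    "-framerate", "30",                       # frame rate of the PNG sequence
    "-i", f"{frames_dir}/frame_%d.png",       # %d matches the unpadded index
    "-c:v", "libx264",                        # h.264 encoding
    "-pix_fmt", "yuv420p",                    # broad player compatibility
    f"{frames_dir}/out.mp4",
], check=True)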


🌟 I know that Flutter uses Skia as its graphics engine. Could I use Skia's abilities (by drawing widgets) somehow so as to produce video frames more directly?
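As a point of comparison for that "more directly" idea, here is a sketch of the ffmpeg side of it, feeding raw RGBA frames straight into ffmpeg over stdin instead of writing intermediate .png files (shown in Python just to illustrate the pipe; the solid-colour frames are a placeholder for whatever the renderer produces):

import subprocess

# Sketch only: encode raw RGBA frames arriving on stdin to H.264.
width, height, fps = 640, 480, 30
proc = subprocess.Popen([
    "ffmpeg", "-y",
    "-f", "rawvideo", "-pix_fmt", "rgba",          # describe the incoming raw frames
    "-s", f"{width}x{height}", "-r", str(fps),
    "-i", "-",                                     # read the frames from stdin
    "-c:v", "libx264", "-pix_fmt", "yuv420p",
    "out.mp4",
], stdin=subprocess.PIPE)

for _ in range(fps * 2):                           # two seconds of placeholder frames
    proc.stdin.write(bytes([255, 0, 0, 255]) * (width * height))

proc.stdin.close()
proc.wait()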

Thank you.


-
Cross Fade Arbitrary Number of Videos ffmpeg Efficiently
15 April 2022, by jippyjoe4
I have a series of videos named 'cut_xxx.mp4' where xxx represents a number 000 through 999. I want to do a cross fade on an arbitrary number of them to create a compilation, and each fade should last 4 seconds. Currently, I'm doing this with Python, but I suspect this is not the most efficient way:


import math
import subprocess

def get_length(filename):
    # Read the container duration (in seconds) with ffprobe.
    result = subprocess.run(["ffprobe", "-v", "error", "-show_entries",
                             "format=duration", "-of",
                             "default=noprint_wrappers=1:nokey=1", filename],
                            stdout=subprocess.PIPE,
                            stderr=subprocess.STDOUT)
    return float(result.stdout)

CROSS_FADE_DURATION = 4

basevideo = 'cut_000.mp4'
for ii in range(total_videos - 1):  # total_videos: number of cut_xxx.mp4 files, defined elsewhere
    fade_start = math.floor(get_length(basevideo) - CROSS_FADE_DURATION)  # new one
    outfile = f'cross_fade_{ii}.mp4'
    append_video = f'cut_{str(ii+1).zfill(3)}.mp4'
    cfcmd = f'ffmpeg -y -i {basevideo} -i {append_video} -filter_complex "xfade=offset={fade_start}:duration={CROSS_FADE_DURATION}" -an {outfile}'
    basevideo = outfile
    subprocess.call(cfcmd, shell=True)  # shell=True so the quoted filter string is parsed as one argument
    print(fade_start)



I specifically remove the audio with -an because I'll add an audio track later. The issue I see here is that I'm compressing the video over and over again with each individual video file I add to the compilation, because I'm only adding one video at a time and then re-encoding.

There should be a way to cross fade multiple videos together into a compilation, but I'm not sure what this would look like or how I would get it to work for an arbitrary number of video files of different durations. Any idea on what that monolithic ffmpeg command would look like, or how I could automatically generate it given a list of videos and their durations?
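For reference, here is a minimal sketch of what a generator for that single monolithic command could look like (untested; the function name build_xfade_command is my own, it reuses get_length from the snippet above, and it assumes at least two clips that share resolution, frame rate and pixel format, as xfade requires):

import subprocess

def build_xfade_command(files, fade=4, output='compilation.mp4'):
    # Sketch only: chain one xfade filter per join so every clip is decoded and
    # encoded exactly once. files is a list like ['cut_000.mp4', 'cut_001.mp4', ...].
    inputs = []
    for f in files:
        inputs += ['-i', f]
    filters = []
    offset = 0.0
    prev = '[0:v]'                        # start the chain from the first input's video
    for n, f in enumerate(files[:-1]):
        offset += get_length(f) - fade    # where the next fade starts on the output timeline
        out = f'[v{n + 1}]'               # arbitrary label for the intermediate result
        filters.append(f'{prev}[{n + 1}:v]xfade=transition=fade:'
                       f'duration={fade}:offset={offset:.3f}{out}')
        prev = out
    return (['ffmpeg', '-y'] + inputs +
            ['-filter_complex', ';'.join(filters), '-map', prev, '-an', output])

# e.g. subprocess.run(build_xfade_command(['cut_000.mp4', 'cut_001.mp4', 'cut_002.mp4']))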


-
Artifacts in ffmpeg fifo, low fps, stream ends
11 August 2020, by Ben Gardner
I'm using a Raspberry Pi 3B and a 4 for this; neither works.


I'm trying to both pass my capture card's input (/dev/video0) through a fifo file so I can play it on the screen via omxplayer (1080p/30fps), and also grab frames of /dev/video0 out to a series of jpgs (1920x1080 right now, but I'd like it to be 640x480) so I can do analysis on it while it's being played. The input to the capture card is television via HDMI.

This is the command I use to make the stream go to the fifo and jpgs.


ffmpeg -y -f v4l2 -input_format mjpeg -framerate 30 -video_size 1920x1080 \
-thread_queue_size 16384 -i /dev/video0 -f alsa -ac 1 \
-thread_queue_size 16384 -i hw:CARD=U0x534d0x2109,DEV=0 \
-c:v copy -b:v 32000k -preset faster -x264opts keyint=50 \
-g 25 -pix_fmt yuvj422p -c:a aac -b:a 128k -codec:v copy -f tee \
-map 0:v -map 1:a "fifo.mkv|output_%3d.jpg"



Here is my output, which gives me 30fps originally, sometimes dipping to 29-28 fps, and then skipping (both audio and video) and artifacts in the video after around 5-10 minutes, with the severity eventually increasing until it stops:


[mjpeg @ 0x1504490] EOI missing, emulating
Input #0, video4linux2,v4l2, from '/dev/video0':
 Duration: N/A, start: 27151.039849, bitrate: N/A
 Stream #0:0: Video: mjpeg, yuvj422p(pc, bt470bg/unknown/unknown), 1920x1080, 30 fps, 30 tbr, 1000k tbn, 1000k tbc
Guessed Channel Layout for Input Stream #1.0 : mono
Input #1, alsa, from 'hw:CARD=U0x534d0x2109,DEV=0':
 Duration: N/A, start: 1596773777.825328, bitrate: 1536 kb/s
 Stream #1:0: Audio: pcm_s16le, 96000 Hz, mono, s16, 1536 kb/s
Stream mapping:
 Stream #0:0 -> #0:0 (copy)
 Stream #1:0 -> #0:1 (pcm_s16le (native) -> aac (native))
Press [q] to stop, [?] for help
Output #0, tee, to 'fifo.mkv|output_%3d.jpg':
 Metadata:
 encoder : Lavf58.20.100
 Stream #0:0: Video: mjpeg, yuvj422p(pc, bt470bg/unknown/unknown), 1920x1080, q=2-31, 32000 kb/s, 30 fps, 30 tbr, 1000k tbn, 1000k tbc
 Stream #0:1: Audio: aac (LC), 96000 Hz, mono, fltp, 128 kb/s
 Metadata:
 encoder : Lavc58.35.100 aac
[alsa @ 0x1507300] Thread message queue blocking; consider raising the thread_queue_size option (current value: 16384)
[alsa @ 0x1507300] ALSA buffer xrun.time=00:13:55.89 bitrate=N/A speed=0.972x
[alsa @ 0x1507300] ALSA buffer xrun.time=00:14:16.02 bitrate=N/A speed=0.974x
[alsa @ 0x1507300] ALSA buffer xrun.time=00:14:27.25 bitrate=N/A speed=0.972x
[alsa @ 0x1507300] ALSA buffer xrun.time=00:14:33.35 bitrate=N/A speed=0.97x
[alsa @ 0x1507300] ALSA buffer xrun.time=00:14:44.71 bitrate=N/A speed=0.969x
[alsa @ 0x1507300] ALSA buffer xrun.time=00:14:49.71 bitrate=N/A speed=0.97x
[alsa @ 0x1507300] ALSA buffer xrun.time=00:15:01.51 bitrate=N/A speed=0.967x
[alsa @ 0x1507300] ALSA buffer xrun.time=00:15:07.78 bitrate=N/A speed=0.969x
[alsa @ 0x1507300] ALSA buffer xrun.time=00:15:14.27 bitrate=N/A speed=0.962x
[alsa @ 0x1507300] ALSA buffer xrun.time=00:15:26.44 bitrate=N/A speed=0.96x
[alsa @ 0x1507300] ALSA buffer xrun.time=00:15:32.40 bitrate=N/A speed=0.96x
[alsa @ 0x1507300] ALSA buffer xrun.time=00:15:38.63 bitrate=N/A speed=0.963x
[alsa @ 0x1507300] ALSA buffer xrun.time=00:15:45.60 bitrate=N/A speed=0.959x
[alsa @ 0x1507300] ALSA buffer xrun.time=00:15:50.93 bitrate=N/A speed=0.957x
 Last message repeated 1 times
[alsa @ 0x1507300] ALSA buffer xrun.time=00:16:00.79 bitrate=N/A speed=0.951x
 Last message repeated 1 times
[alsa @ 0x1507300] ALSA buffer xrun.time=00:16:05.29 bitrate=N/A speed=0.949x
[alsa @ 0x1507300] ALSA buffer xrun.time=00:16:07.19 bitrate=N/A speed=0.949x
[alsa @ 0x1507300] ALSA buffer xrun.time=00:16:11.72 bitrate=N/A speed=0.945x
[alsa @ 0x1507300] ALSA buffer xrun.time=00:16:16.02 bitrate=N/A speed=0.944x
[alsa @ 0x1507300] ALSA buffer xrun.time=00:16:26.39 bitrate=N/A speed=0.953x
[alsa @ 0x1507300] ALSA buffer xrun.time=00:16:30.22 bitrate=N/A speed=0.938x
[alsa @ 0x1507300] ALSA buffer xrun.time=00:16:30.98 bitrate=N/A speed=0.937x
[alsa @ 0x1507300] ALSA buffer xrun.time=00:16:37.66 bitrate=N/A speed=0.941x
[alsa @ 0x1507300] ALSA buffer xrun.time=00:16:41.28 bitrate=N/A speed=0.935x
[alsa @ 0x1507300] ALSA buffer xrun.time=00:16:42.95 bitrate=N/A speed=0.934x
[alsa @ 0x1507300] ALSA buffer xrun.time=00:16:45.82 bitrate=N/A speed=0.933x
[alsa @ 0x1507300] ALSA buffer xrun.time=00:16:46.76 bitrate=N/A speed=0.932x
[alsa @ 0x1507300] ALSA buffer xrun.time=00:16:47.05 bitrate=N/A speed=0.931x
[alsa @ 0x1507300] ALSA buffer xrun.time=00:16:50.74 bitrate=N/A speed=0.927x
[alsa @ 0x1507300] ALSA buffer xrun.time=00:16:57.05 bitrate=N/A speed=0.927x
[alsa @ 0x1507300] ALSA buffer xrun.time=00:16:58.91 bitrate=N/A speed=0.927x
[alsa @ 0x1507300] ALSA buffer xrun.time=00:17:09.41 bitrate=N/A speed=0.92x
[alsa @ 0x1507300] ALSA buffer xrun.time=00:17:12.04 bitrate=N/A speed=0.917x
[alsa @ 0x1507300] ALSA buffer xrun.time=00:17:12.61 bitrate=N/A speed=0.916x
Killed31579 fps= 28 q=-1.0 size=N/A time=00:17:32.56 bitrate=N/A speed=0.919x



I also occasionally get this error:


[tee @ 0x17001c0] Non-monotonous DTS in output stream 0:1; previous: 99251754, current: 99247503; changing to 99251755. This may result in incorrect timestamps in the output file.



I'm assuming this has something to do with the audio getting routed to the jpg. I've tried [select=\'v\'] in front of the jpg, which doesn't change the behavior, as well as [map=\'1\:a\'] in front of the mkv, which says [matroska @ 0xe446f0] Unknown option 'map'.

I should also disclaim that I don't have much of an idea of what this command is doing compression-wise; I basically just copy/pasted that part.


What edits do I need to make to get this into a fifo and a series of jpgs at the same time?
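For reference, this is roughly the shape I think the tee part should take (a sketch only, shown as a Python invocation; the select and f slave options come from the tee muxer's documentation, and the input options and device names are copied from my command above), though I haven't managed to verify it on the Pi:

import subprocess

# Sketch only (unverified): per-slave tee options, so the matroska fifo receives
# video + audio while the image2 output receives video only. With -c:v copy the
# mjpeg packets are written out directly as .jpg files.
subprocess.run([
    "ffmpeg", "-y",
    "-f", "v4l2", "-input_format", "mjpeg", "-framerate", "30",
    "-video_size", "1920x1080", "-thread_queue_size", "16384", "-i", "/dev/video0",
    "-f", "alsa", "-ac", "1",
    "-thread_queue_size", "16384", "-i", "hw:CARD=U0x534d0x2109,DEV=0",
    "-c:v", "copy", "-c:a", "aac", "-b:a", "128k",
    "-map", "0:v", "-map", "1:a",
    "-f", "tee",
    "[f=matroska]fifo.mkv|[select=v:f=image2]output_%03d.jpg",
])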