Advanced search

Media (1)

Word: - Tags -/sintel

Other articles (77)

  • User profiles

    12 April 2011, by

    Each user has a profile page where they can edit their personal information. In the default top-of-page menu, a menu item is created automatically when MediaSPIP is initialised, visible only when the visitor is logged in to the site.
    Users can reach the profile editor from their author page; a "Modifier votre profil" link in the navigation is (...)

  • Configuring language support

    15 November 2010, by

    Accessing the configuration and adding supported languages
    To configure support for new languages, go to the "Administrer" section of the site.
    From there, in the navigation menu, a "Gestion des langues" section lets you enable support for new languages.
    Each newly added language can still be disabled as long as no object has been created in that language. Once one has, it becomes greyed out in the configuration and (...)

  • A selection of projects using MediaSPIP

    29 April 2011, by

    The examples below are representative of specific uses of MediaSPIP in certain projects.
    Do you think you have built a "remarkable" site with MediaSPIP? Let us know here.
    Ferme MediaSPIP @ Infini
    The Infini association runs activities around community access, internet access points, training, and innovative projects in information and communication technologies, as well as website hosting. It plays a unique role in this field (...)

On other sites (5988)

  • How to compose (encoded) pixels into videos / live-streams in Flutter?

    18 October 2022, by Ryuujo

    I am trying to make an OBS-like app using Flutter.

    


    I was trying to use the Flutter engine to draw widgets onto the video frames, along with the screen, as separate layers.

    I came up with a bad way, which:

    • uses RenderRepaintBoundary to get images of a specific area;
    • uses ffmpeg to compose this .png series into a video with H.264 encoding;
    • (then maybe uses the .mp4 files to publish as a video stream??)

    which is baaad in real-time performance and efficiency, apparently.

    (relevant code)

    


    // some_page.dart

    int index = 0;

    Future<void> onTestPressed() async {
      int i = 0;
      while (i++ < 600) {
        try {
          var render = repaintKey.currentContext!.findRenderObject()
              as RenderRepaintBoundary;
          double dpr = window.devicePixelRatio;
          var byteData = await render
              .toImage(pixelRatio: dpr)
              .then((image) => image.toByteData(format: ImageByteFormat.png));

          var tempDir = await getTemporaryDirectory();
          var fileName = '${tempDir.path}/frame_${index++}';

          var bytes = byteData!.buffer.asUint8List();
          var file = File(fileName);
          if (!file.existsSync()) {
            file.createSync();
          }

          await file.writeAsBytes(bytes);
          // OpenFile.open(fileName);
        } catch (e) {
          if (kDebugMode) {
            print(e);
          }
        }
      }
    }

    🌟 I know that Flutter uses Skia as its graphics engine; could I somehow use Skia's abilities (for drawing widgets) to produce video frames more directly?
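    Not part of the original question, but one common alternative to the PNG round-trip above is piping raw pixel buffers straight into an encoder's stdin. A minimal Python sketch, assuming an ffmpeg binary on the PATH and RGBA input (the Dart side would analogously write toByteData(format: ImageByteFormat.rawRgba) bytes into the process):

```python
import subprocess

WIDTH, HEIGHT, FPS = 640, 480, 30

# ffmpeg invocation that reads raw RGBA frames on stdin and encodes H.264.
# "-i -" makes ffmpeg read the frame stream from standard input.
cmd = ["ffmpeg", "-y",
       "-f", "rawvideo", "-pix_fmt", "rgba",
       "-s", f"{WIDTH}x{HEIGHT}", "-r", str(FPS),
       "-i", "-",
       "-c:v", "libx264", "-pix_fmt", "yuv420p",
       "out.mp4"]

def frame_bytes(rgba):
    """One solid-colour frame; a real app would pass captured pixel buffers."""
    return bytes(rgba) * (WIDTH * HEIGHT)

# The actual spawning is left commented out so the sketch stands alone;
# uncomment these lines (with ffmpeg installed) to produce out.mp4:
# proc = subprocess.Popen(cmd, stdin=subprocess.PIPE)
# for _ in range(FPS * 2):  # two seconds of video
#     proc.stdin.write(frame_bytes((128, 128, 128, 255)))
# proc.stdin.close()
# proc.wait()
```

    This avoids encoding and decoding a PNG for every frame, which is the main cost of the approach in the question.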

    Thank you.

  • Cross Fade Arbitrary Number of Videos ffmpeg Efficiently

    15 April 2022, by jippyjoe4

    I have a series of videos named 'cut_xxx.mp4', where xxx is a number from 000 to 999. I want to cross fade an arbitrary number of them into a compilation, with each fade lasting 4 seconds. Currently I'm doing this with Python, but I suspect it's not the most efficient way:


    import subprocess
    import math  # needed for math.floor below

    def get_length(filename):
      result = subprocess.run(["ffprobe", "-v", "error", "-show_entries",
                               "format=duration", "-of",
                               "default=noprint_wrappers=1:nokey=1", filename],
                              stdout=subprocess.PIPE,
                              stderr=subprocess.STDOUT)
      return float(result.stdout)

    CROSS_FADE_DURATION = 4

    basevideo = 'cut_000.mp4'
    for ii in range(total_videos - 1):
      fade_start = math.floor(get_length(basevideo) - CROSS_FADE_DURATION)
      outfile = f'cross_fade_{ii}.mp4'
      append_video = f'cut_{str(ii+1).zfill(3)}.mp4'
      cfcmd = f'ffmpeg -y -i {basevideo} -i {append_video} -filter_complex "xfade=offset={fade_start}:duration={CROSS_FADE_DURATION}" -an {outfile}'
      basevideo = outfile
      subprocess.call(cfcmd, shell=True)  # string command needs shell=True
      print(fade_start)


    I specifically remove the audio with -an because I'll add an audio track later. The issue I see here is that the whole compilation gets re-encoded for every video file I add, because I append one video at a time and then re-encode.


    There should be a way to cross fade multiple videos into a compilation in one pass, but I'm not sure what that would look like or how to make it work for an arbitrary number of video files of different durations. What would that monolithic ffmpeg command look like, and how could I generate it automatically given a list of videos and their durations?

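    For reference, ffmpeg's xfade filter can be chained through labelled intermediate outputs in a single -filter_complex graph, with each offset accumulated from the running length of the composite. A hedged Python sketch (the function name and output filename are invented for illustration; durations would come from ffprobe as in the script above):

```python
def build_xfade_command(files, durations, fade=4):
    """Build one ffmpeg command chaining xfade across all inputs.

    files: input paths; durations: matching clip lengths in seconds.
    Each fade starts `fade` seconds before the end of the running
    composite, so the k-th offset is sum(d_0..d_{k-1}) - k*fade.
    """
    inputs = " ".join(f"-i {f}" for f in files)
    chains = []
    prev = "[0:v]"          # label of the running composite
    offset = 0.0
    for k in range(1, len(files)):
        offset += durations[k - 1] - fade
        out = f"[v{k}]"
        chains.append(f"{prev}[{k}:v]xfade=transition=fade:"
                      f"duration={fade}:offset={offset}{out}")
        prev = out
    graph = ";".join(chains)
    return (f'ffmpeg -y {inputs} -filter_complex "{graph}" '
            f'-map "{prev}" -an compilation.mp4')
```

    For three 10-second clips and a 4-second fade, this yields offsets of 6.0 and 12.0 seconds, and every input is encoded exactly once instead of repeatedly.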

  • Artifacts in ffmpeg fifo, low fps, stream ends

    11 August 2020, by Ben Gardner

    I'm using a Raspberry Pi 3B and a 4 for this; neither works.


    I'm trying to both pass my capture card's input (/dev/video0) through a fifo file so I can play it on the screen via omxplayer (1080p/30fps), and also grab frames from /dev/video0 into a series of jpgs (1920x1080 right now, but I'd like 640x480) so I can analyse the stream while it plays. The input to the capture card is television over HDMI.


    This is the command I use to make the stream go to the fifo and jpgs.


    ffmpeg -y -f v4l2 -input_format mjpeg -framerate 30 -video_size 1920x1080 \
    -thread_queue_size 16384 -i /dev/video0 -f alsa -ac 1 \
    -thread_queue_size 16384 -i hw:CARD=U0x534d0x2109,DEV=0 \
    -c:v copy -b:v 32000k -preset faster -x264opts keyint=50 \
    -g 25 -pix_fmt yuvj422p -c:a aac -b:a 128k -codec:v copy -f tee \
    -map 0:v -map 1:a "fifo.mkv|output_%3d.jpg"


    Here is my output. It starts at 30 fps, occasionally dipping to 28-29 fps; after around 5-10 minutes both audio and video start skipping and the video shows artifacts, with the severity increasing until the stream stops:


    [mjpeg @ 0x1504490] EOI missing, emulating
    Input #0, video4linux2,v4l2, from '/dev/video0':
      Duration: N/A, start: 27151.039849, bitrate: N/A
        Stream #0:0: Video: mjpeg, yuvj422p(pc, bt470bg/unknown/unknown), 1920x1080, 30 fps, 30 tbr, 1000k tbn, 1000k tbc
    Guessed Channel Layout for Input Stream #1.0 : mono
    Input #1, alsa, from 'hw:CARD=U0x534d0x2109,DEV=0':
      Duration: N/A, start: 1596773777.825328, bitrate: 1536 kb/s
        Stream #1:0: Audio: pcm_s16le, 96000 Hz, mono, s16, 1536 kb/s
    Stream mapping:
      Stream #0:0 -> #0:0 (copy)
      Stream #1:0 -> #0:1 (pcm_s16le (native) -> aac (native))
    Press [q] to stop, [?] for help
    Output #0, tee, to 'fifo.mkv|output_%3d.jpg':
      Metadata:
        encoder         : Lavf58.20.100
        Stream #0:0: Video: mjpeg, yuvj422p(pc, bt470bg/unknown/unknown), 1920x1080, q=2-31, 32000 kb/s, 30 fps, 30 tbr, 1000k tbn, 1000k tbc
        Stream #0:1: Audio: aac (LC), 96000 Hz, mono, fltp, 128 kb/s
        Metadata:
          encoder         : Lavc58.35.100 aac
    [alsa @ 0x1507300] Thread message queue blocking; consider raising the thread_queue_size option (current value: 16384)
    [alsa @ 0x1507300] ALSA buffer xrun.time=00:13:55.89 bitrate=N/A speed=0.972x
    [alsa @ 0x1507300] ALSA buffer xrun.time=00:14:16.02 bitrate=N/A speed=0.974x
    [... many similar ALSA buffer xrun lines, with speed gradually dropping ...]
    [alsa @ 0x1507300] ALSA buffer xrun.time=00:17:12.04 bitrate=N/A speed=0.917x
    [alsa @ 0x1507300] ALSA buffer xrun.time=00:17:12.61 bitrate=N/A speed=0.916x
    Killed31579 fps= 28 q=-1.0 size=N/A time=00:17:32.56 bitrate=N/A speed=0.919x

    I also occasionally get this error:


    [tee @ 0x17001c0] Non-monotonous DTS in output stream 0:1; previous: 99251754, current: 99247503; changing to 99251755. This may result in incorrect timestamps in the output file.


    I'm assuming this has something to do with the audio getting routed to the jpg. I've tried [select=\'v\'] in front of the jpg, which doesn't change the behavior, as well as [map=\'1\:a\'] in front of the mkv, which says [matroska @ 0xe446f0] Unknown option 'map'.
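    For what it's worth, the tee muxer expects per-output options in square brackets before each output inside the quoted tee spec, not as global options, which would explain the Unknown option 'map' error. A sketch in Python list form (the stream selection shown is an assumption about the intended fix, not a tested command):

```python
# Per-output options for the tee muxer go in [brackets] before each output:
# the matroska branch feeds the fifo, and select='v' keeps audio out of the
# image2 (jpg) branch. Sketch only; paths are taken from the question.
tee_spec = "[f=matroska]fifo.mkv|[select='v':f=image2]output_%03d.jpg"

# Tail of the ffmpeg argument list, replacing the original quoted output:
tail = ["-f", "tee", "-map", "0:v", "-map", "1:a", tee_spec]
```

    With no shell in between, the quotes around 'v' are passed through literally; when typing the spec in a shell they would need escaping, e.g. [select=\'v\':f=image2].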


    I should also disclaim that I don't have much idea of what this command is doing compression-wise; I basically just copied and pasted that part.


    What edits do I need to make to get this into a fifo and a series of jpgs at the same time?
