
Other articles (40)
-
Authorizations overridden by plugins
27 April 2010, by Mediaspip core
autoriser_auteur_modifier() so that visitors are able to edit their own information on the authors page -
Keeping control of your media in your hands
13 April 2011
The vocabulary used on this site, and around MediaSPIP in general, aims to avoid references to Web 2.0 and to the companies that profit from media sharing.
While using MediaSPIP, you are invited to avoid using words like "Brand", "Cloud" and "Market".
MediaSPIP is designed to facilitate the sharing of creative media online, while allowing authors to retain complete control of their work.
MediaSPIP aims to be accessible to as many people as possible and development is based on expanding the (...) -
Contribute to translation
13 April 2011
You can help us improve the language used in the software interface to make MediaSPIP more accessible and user-friendly. You can also translate the interface into any language, allowing it to spread to new linguistic communities.
To do this, we use the SPIP translation interface, where all the language modules of MediaSPIP are available. Just subscribe to the mailing list and request further information on translation.
MediaSPIP is currently available in French and English (...)
On other sites (4374)
-
Can you stream video with ffmpeg and opencv without long buffers
13 September 2019, by steve
I'm successfully getting a video stream using ffmpeg and displaying it with OpenCV, but the video starts and stops very often and is not as smooth as when played in a browser. Is there a way to get the video stream to be as smooth as in a browser?
I’ve tried playing with ffmpeg parameters as well as having multiple pipes to stitch together a single video.
import cv2
import numpy as np
import subprocess as sp
import time

# Playlist manifest for video from a Nevada DOT traffic camera
VIDEO_URL = "https://wowza1.nvfast.org/bmw3/charleston_and_fremont_public.stream/playlist.m3u8"

width = 360
height = 240

pipe = sp.Popen(['ffmpeg', "-i", VIDEO_URL,
                 "-loglevel", "quiet",    # no text output
                 "-an",                   # disable audio
                 "-f", "image2pipe",
                 "-pix_fmt", "bgr24",
                 "-r", "15",              # FPS
                 #"-hls_list_size", "3",  # HLS muxer options; they have no
                 #"-hls_time", "8",       # effect on an image2pipe output
                 "-vcodec", "rawvideo", "-"],
                stdin=sp.PIPE, stdout=sp.PIPE)

while True:
    # Read width*height*3 bytes (= one raw bgr24 frame) and convert to an image
    raw_image = pipe.stdout.read(width * height * 3)
    img = np.frombuffer(raw_image, dtype='uint8').reshape((height, width, 3))

    # Show image
    cv2.imshow('pipe', img)
    cv2.waitKey(1)
    time.sleep(0.05)  # slow the loop down so the video plays more naturally

The expected output is an OpenCV window that displays the video just like https://cctv.nvfast.org/ does.
I assume the problem lies in ffmpeg not getting the video chunks.
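One possible cause worth checking (an assumption, not a confirmed fix): the loop above reads the pipe synchronously, so any hiccup in decoding stalls rendering. A background thread that drains the pipe and keeps only the newest frame lets the display loop always show the latest picture. A minimal sketch, using the question's 360x240 bgr24 frame size:

```python
import threading

FRAME_BYTES = 360 * 240 * 3  # one bgr24 frame at the question's resolution


class LatestFrameReader:
    """Drain a byte stream in a background thread, keeping only the newest frame.

    The display loop then shows the most recent frame instead of falling
    behind the stream and stalling while it catches up.
    """

    def __init__(self, stream, frame_bytes=FRAME_BYTES):
        self.stream = stream
        self.frame_bytes = frame_bytes
        self.latest = None
        self.lock = threading.Lock()
        self.done = threading.Event()       # set on EOF / broken pipe
        self.thread = threading.Thread(target=self._drain, daemon=True)
        self.thread.start()

    def _drain(self):
        while True:
            buf = self.stream.read(self.frame_bytes)
            if len(buf) < self.frame_bytes:  # EOF or short read: stop
                self.done.set()
                return
            with self.lock:
                self.latest = buf            # overwrite, never queue

    def get(self):
        """Return the newest complete frame, or None if nothing arrived yet."""
        with self.lock:
            return self.latest
```

In the question's loop this would replace the direct `pipe.stdout.read(...)`: construct `LatestFrameReader(pipe.stdout)` once, then call `get()` each iteration (skipping the iteration while it returns None) and pass the bytes to `np.frombuffer(...).reshape(...)` as before.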
-
FFmpeg Time Lapse from Sources with Long Frozen Tail End
5 July 2019, by Rich_F
My source for inputs into FFmpeg is either one AVI file or a concat of many of them. Either way, my resulting timelapse file has a long tail of frames that are a repeat of a single frame. It's as if a very long freeze frame ends up at the end of my output file.
I'm on an older Mac Pro so I can't update my FFmpeg. I have a laptop with a newer version and I get the same result there as well. I'm not sure whether it's because my source files are AVI or not.
ffmpeg version 4.1.1 Copyright (c) 2000-2019 the FFmpeg developers
built with Apple LLVM version 8.0.0 (clang-800.0.42.1)
configuration: --prefix=/usr/local/Cellar/ffmpeg/4.1.1 --enable-shared --enable-pthreads --enable-version3 --enable-hardcoded-tables --enable-avresample --cc=clang --host-cflags=-I/System/Library/Frameworks/JavaVM.framework/Versions/Current/Headers/ --host-ldflags= --enable-ffplay --enable-gnutls --enable-gpl --enable-libaom --enable-libbluray --enable-libmp3lame --enable-libopus --enable-librubberband --enable-libsnappy --enable-libtesseract --enable-libtheora --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libx265 --enable-libxvid --enable-lzma --enable-libfontconfig --enable-libfreetype --enable-frei0r --enable-libass --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-librtmp --enable-libspeex --enable-videotoolbox --disable-libjack --disable-indev=jack --enable-libaom --enable-libsoxr
libavutil 56. 22.100 / 56. 22.100
libavcodec 58. 35.100 / 58. 35.100
libavformat 58. 20.100 / 58. 20.100
libavdevice 58. 5.100 / 58. 5.100
libavfilter 7. 40.101 / 7. 40.101
libavresample 4. 0. 0 / 4. 0. 0
libswscale 5. 3.100 / 5. 3.100
libswresample 3. 3.100 / 3. 3.100
libpostproc 55. 3.100 / 55. 3.100
[avi @ 0x7fa82080c800] sample size (1) != block align (2)
Guessed Channel Layout for Input Stream #0.1 : mono
Input #0, avi, from 'concat:16460001.AVI|16460002.AVI|16460003.AVI|16460004.AVI|16460005.AVI|16460006.AVI|16460007.AVI|16460008.AVI|16460009.AVI|16460010.AVI|16460011.AVI|16460012.AVI|16460013.AVI|16460014.AVI|16460015.AVI|16460016.AVI|16460017.AVI|16460018.AVI|16460019.AVI|16460020.AVI|16460021.AVI':
Duration: 00:10:02.00, start: 0.000000, bitrate: 365923 kb/s
Stream #0:0: Video: mjpeg (MJPG / 0x47504A4D), yuvj420p(pc, bt470bg/unknown/unknown), 1280x720, 30 fps, 30 tbr, 30 tbn, 30 tbc
Stream #0:1: Audio: pcm_s16le ([1][0][0][0] / 0x0001), 32000 Hz, mono, s16, 512 kb/s
Stream mapping:
Stream #0:0 -> #0:0 (mjpeg (native) -> h264 (libx264))
Stream #0:1 -> #0:1 (pcm_s16le (native) -> aac (native))
Press [q] to stop, [?] for help
[libx264 @ 0x7fa82082cc00] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.1 Cache64
[libx264 @ 0x7fa82082cc00] profile High, level 3.1
[libx264 @ 0x7fa82082cc00] 264 - core 155 r2917 0a84d98 - H.264/MPEG-4 AVC codec - Copyleft 2003-2018 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=12 lookahead_threads=2 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=16 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
Output #0, mp4, to 'out.mp4':
Metadata:
encoder : Lavf58.20.100
Stream #0:0: Video: h264 (libx264) (avc1 / 0x31637661), yuvj420p(pc, progressive), 1280x720, q=-1--1, 16 fps, 16384 tbn, 16 tbc
Metadata:
encoder : Lavc58.35.100 libx264
Side data:
cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: -1
Stream #0:1: Audio: aac (LC) (mp4a / 0x6134706D), 32000 Hz, mono, fltp, 69 kb/s
Metadata:
encoder : Lavc58.35.100 aac
frame= 1962 fps=1.2 q=-1.0 Lsize= 136725kB time=03:24:07.00 bitrate= 91.5kbits/s dup=0 drop=365448 speed=7.41x
video:31548kB audio:103624kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 1.149437%
[libx264 @ 0x7fa82082cc00] frame I:10 Avg QP:18.74 size: 62176
[libx264 @ 0x7fa82082cc00] frame P:514 Avg QP:21.24 size: 30528
[libx264 @ 0x7fa82082cc00] frame B:1438 Avg QP:22.74 size: 11121
[libx264 @ 0x7fa82082cc00] consecutive B-frames: 1.4% 2.0% 2.0% 94.6%
[libx264 @ 0x7fa82082cc00] mb I I16..4: 2.1% 97.2% 0.7%
[libx264 @ 0x7fa82082cc00] mb P I16..4: 2.0% 33.3% 0.1% P16..4: 37.7% 12.0% 10.4% 0.0% 0.0% skip: 4.5%
[libx264 @ 0x7fa82082cc00] mb B I16..4: 0.7% 11.5% 0.0% B16..8: 29.1% 4.8% 1.3% direct:12.0% skip:40.6% L0:47.5% L1:42.9% BI: 9.6%
[libx264 @ 0x7fa82082cc00] 8x8 transform intra:94.3% inter:83.0%
[libx264 @ 0x7fa82082cc00] coded y,uvDC,uvAC intra: 73.4% 66.7% 8.5% inter: 25.9% 46.8% 2.2%
[libx264 @ 0x7fa82082cc00] i16 v,h,dc,p: 17% 33% 26% 24%
[libx264 @ 0x7fa82082cc00] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 13% 23% 51% 1% 2% 1% 4% 1% 2%
[libx264 @ 0x7fa82082cc00] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 17% 21% 19% 5% 8% 7% 13% 5% 5%
[libx264 @ 0x7fa82082cc00] i8c dc,h,v,p: 46% 29% 23% 2%
[libx264 @ 0x7fa82082cc00] Weighted P-Frames: Y:47.5% UV:19.3%
[libx264 @ 0x7fa82082cc00] ref P L0: 45.0% 13.8% 20.6% 15.7% 5.0%
[libx264 @ 0x7fa82082cc00] ref B L0: 61.1% 30.4% 8.5%
[libx264 @ 0x7fa82082cc00] ref B L1: 85.8% 14.2%
[libx264 @ 0x7fa82082cc00] kb/s:2107.54
[aac @ 0x7fa82081ea00] Qavg: 130.084
I've read this thread and tried to follow it:
Create time lapse video from other video
Here are some trials I’ve attempted before, all with the same output :
# ffmpeg -y -i $mov -vf framestep=10,setpts=N/FRAME_RATE/TB,fps=2 -r 30 $out
# ffmpeg -y -i $mov -vf framestep=10,setpts=.01*PTS -r 30 $out
# ffmpeg -y -i $mov -vf framestep=10,setpts=.1*PTS -r 30 $out
# ffmpeg -y -i "concat:16460001.AVI|16460002.AVI|16460003.AVI|16460004.AVI|16460005.AVI|16460006.AVI|16460007.AVI|16460008.AVI|16460009.AVI|16460010.AVI|16460011.AVI|16460012.AVI|16460013.AVI|16460014.AVI|16460015.AVI|16460016.AVI|16460017.AVI|16460018.AVI|16460019.AVI|16460020.AVI|16460021.AVI" -vf framestep=10,setpts=.05*PTS -r 30 $out
ffmpeg -y -i "concat:16460001.AVI|16460002.AVI|16460003.AVI|16460004.AVI|16460005.AVI|16460006.AVI|16460007.AVI|16460008.AVI|16460009.AVI|16460010.AVI|16460011.AVI|16460012.AVI|16460013.AVI|16460014.AVI|16460015.AVI|16460016.AVI|16460017.AVI|16460018.AVI|16460019.AVI|16460020.AVI|16460021.AVI" -r 16 -filter:v "setpts=0.01*PTS" out.mp4
Am I overlooking something? I'm trying to speed up the inputs into a single file to quickly review incoming security footage. How can I do this without the super long useless tail at the end?
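Reading the numbers off the log above suggests the tail is not video at all: only 1962 video frames were written, yet the file runs 3h24m. A back-of-the-envelope check (a sketch; every constant is copied from the log):

```python
# Rough check of where the frozen tail comes from, using numbers from the log.
out_fps = 16                         # output stream line: "16 fps"
video_frames_out = 1962              # "frame= 1962"
out_len_s = 3 * 3600 + 24 * 60 + 7   # "time=03:24:07.00"

video_len_s = video_frames_out / out_fps  # ~123 s of real, retimed video
tail_s = out_len_s - video_len_s          # everything after that is the freeze

# The audio size points at the culprit: 103624 kB at 69 kb/s is ~12000 s,
# i.e. the audio alone spans essentially the whole 3h24m output.
audio_len_s = 103624 * 8 / 69
```

If that holds, setpts retimed the video but not the audio, so the last frame freezes on screen while hours of audio play out; dropping the audio with -an (or retiming it) is the usual remedy. Note also that the concat: protocol is only defined for formats that can be concatenated at the byte level, such as MPEG-TS; for AVI sources the concat demuxer (-f concat -i list.txt) is the supported route.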
-
ffmpeg hstack long list of files [closed]
16 December 2024, by Elle Fiorentino-Lange
I have 1920 images that are 1920x1080 in resolution. I've sliced each image into 1x1080 slices. They are in folders labeled img0001_slices, img0002_slices, ... img1920_slices. In the folders the slices are labeled img0001_slice_0, img0001_slice_1, ... img0001_slice_1919. I want to use ffmpeg to form new images out of the slices. New image one should be the first slice of all the original images, lined up horizontally from img1920 on the left to img0001 on the right. New image two should be the second slice of the original images in the same order, new image three the third, and so on.


So far I've been trying to get a shell script to work where I first create a text file of all the different slices to be combined, i.e.


slice_0_inputs.txt


file 'img1920_slices/img1920_slice_0.png'
file 'img1919_slices/img1919_slice_0.png'
file 'img1918_slices/img1918_slice_0.png'
.
.
.
file 'img0001_slices/img0001_slice_0.png'



Then I run
ffmpeg -f concat -i slice_0_inputs.txt -filter_complex "hstack=inputs=1920" reconstructed_slice_0.png


But I keep getting the error


Cannot find a matching stream for unlabeled input pad hstack
Error binding filtergraph inputs/outputs: Invalid argument



I've tried reducing the file slice_0_inputs.txt to just 2 filepaths and running
ffmpeg -f concat -i slice_0_inputs.txt -filter_complex "[0:v][1:v]hstack=inputs=2" test.png


But I get the error


Invalid file index 1 in filtergraph description [0:v][1:v]hstack=inputs=2.
Error binding filtergraph inputs/outputs: Invalid argument



Could someone please help me understand the issue I am running into here?
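Both errors follow from how the concat demuxer works: it joins files sequentially in time and exposes a single video stream, so pads [1:v] through [1919:v] never exist and hstack has nothing to bind to. hstack needs one -i per tile. A sketch that builds such a command (file names follow the question; whether one ffmpeg run accepts 1920 inputs depends on OS open-file limits, so stacking in batches may be necessary):

```python
# hstack needs a separate input per tile; the concat demuxer only ever
# exposes one video stream, which is why [1:v] could not be bound.
n = 1920        # number of source images = tiles per reconstructed row
slice_idx = 0   # which slice to reconstruct

cmd = ["ffmpeg", "-y"]
# One -i per slice, ordered img1920 (left) down to img0001 (right).
for img in range(n, 0, -1):
    cmd += ["-i", f"img{img:04d}_slices/img{img:04d}_slice_{slice_idx}.png"]

# Label every input pad explicitly, then stack them horizontally.
pads = "".join(f"[{i}:v]" for i in range(n))
cmd += ["-filter_complex", f"{pads}hstack=inputs={n}",
        "-frames:v", "1", f"reconstructed_slice_{slice_idx}.png"]
```

The resulting list can be handed to subprocess.run(cmd), and looping slice_idx from 0 to 1079 would produce all the reconstructed images.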