
Advanced search
Media (3)
-
Elephants Dream - Cover of the soundtrack
17 October 2011, by
Updated: October 2011
Language: English
Type: Image
-
Valkaama DVD Label
4 October 2011, by
Updated: February 2013
Language: English
Type: Image
-
Publishing an image simply
13 April 2011, by
Updated: February 2012
Language: French
Type: Video
Other articles (73)
-
Customizing by adding your logo, banner or background image
5 September 2013, by
Some themes support three customization elements: adding a logo; adding a banner; adding a background image.
-
Publishing on MediaSPIP
13 June 2013
Can I post content from an iPad tablet?
Yes, if your installed MediaSPIP is at version 0.2 or higher. If needed, contact your MediaSPIP administrator to find out.
-
Sites built with MediaSPIP
2 May 2011, by
This page presents some of the sites running MediaSPIP.
You can of course add yours using the form at the bottom of the page.
On other sites (8147)
-
adaptive HTTP streaming for open codecs
14 October 2010, by silvia
At this week's FOMS in New York we had one over-arching topic that seemed to be of interest to every single participant: how to do adaptive bitrate streaming over HTTP for open codecs. On the first day, there was a general discussion about the advantages and disadvantages of adaptive HTTP (...)
-
How do I modify this ffmpeg build script for minimal binary size output
13 January 2015, by Brian
I'm trying to build the ffmpeg binaries for Android for 3 chipsets. The output file size, around 15 MB, is too large to include in the project.
https://github.com/falnatsheh/ffmpeg-android is the GitHub project repo.
The .sh build script for ffmpeg is like this:
#!/bin/bash

. abi_settings.sh $1 $2 $3

pushd ffmpeg

case $1 in
  armeabi-v7a | armeabi-v7a-neon)
    CPU='cortex-a8'
    ;;
  x86)
    CPU='i686'
    ;;
esac

make clean

./configure \
  --target-os="$TARGET_OS" \
  --cross-prefix="$CROSS_PREFIX" \
  --arch="$NDK_ABI" \
  --cpu="$CPU" \
  --enable-runtime-cpudetect \
  --sysroot="$NDK_SYSROOT" \
  --enable-pic \
  --enable-libx264 \
  --enable-pthreads \
  --disable-debug \
  --disable-ffserver \
  --enable-version3 \
  --enable-hardcoded-tables \
  --disable-ffplay \
  --disable-ffprobe \
  --enable-gpl \
  --enable-yasm \
  --disable-doc \
  --disable-shared \
  --enable-static \
  --pkg-config="${2}/ffmpeg-pkg-config" \
  --prefix="${2}/build/${1}" \
  --extra-cflags="-I${TOOLCHAIN_PREFIX}/include $CFLAGS" \
  --extra-ldflags="-L${TOOLCHAIN_PREFIX}/lib $LDFLAGS" \
  --extra-libs="-lm" \
  --extra-cxxflags="$CXX_FLAGS" || exit 1

make -j${NUMBER_OF_CORES} && make install || exit 1
popd

I tried adding --disable-everything as the first line in configure, but then the compiler complains that I didn't set a target-os, even though it is on the very next line.
In the app I only use ffmpeg to take input mp4 videos and transpose and rotate them.
Here are the two commands:

-y -i %s -vf transpose=%d -tune film -metadata:s:v rotate=0 -c:v libx264 -preset ultrafast -crf 27 -c:a copy -bsf:a aac_adtstoasc %s

where %s is a file path, and then to concat the files:
-y -i concat:%s -preset ultrafast -crf 27 -c:v copy -c:a copy -bsf:a aac_adtstoasc %s
If someone can help me with the build script, that would be awesome.
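For what it's worth, a rough sketch of what I think the trimmed configure call could look like. The component names are guesses derived from the two commands above, not a tested list, and note that --disable-everything needs two ASCII hyphens and every continuation line its trailing backslash:

./configure \
  --disable-everything \
  --target-os="$TARGET_OS" \
  --cross-prefix="$CROSS_PREFIX" \
  --arch="$NDK_ABI" \
  --cpu="$CPU" \
  --sysroot="$NDK_SYSROOT" \
  --enable-small \
  --enable-gpl --enable-version3 --enable-libx264 \
  --enable-encoder=libx264 \
  --enable-decoder=h264 --enable-decoder=aac \
  --enable-demuxer=mov --enable-muxer=mp4 \
  --enable-parser=h264 --enable-parser=aac \
  --enable-protocol=file --enable-protocol=concat --enable-protocol=pipe \
  --enable-filter=transpose --enable-filter=format --enable-filter=scale \
  --enable-bsf=aac_adtstoasc

with the remaining cross-compile flags (--pkg-config, --prefix, --extra-cflags and so on) kept exactly as in the script above. Everything not explicitly re-enabled is compiled out, which is where the size reduction would come from; --enable-small additionally optimizes for size over speed.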
-
How to retrieve, process and display frames from a capture device with minimal latency
14 March 2024, by valle
I'm currently working on a project where I need to retrieve frames from a capture device, process them, and display them with minimal latency and compression. Initially, my goal is to keep the video stream as close to the source signal as possible, ensuring no noticeable compression or latency. However, as the project progresses, I also want to adjust the framerate and apply image compression.


I have experimented with FFmpeg, since that was the first thing that came to mind when thinking about capturing video frames and processing them.


However, I am not satisfied yet, since I am experiencing delay in the stream (no huge delay, but definitely noticeable).
The command that has worked best for me so far:


ffmpeg -rtbufsize 512M -f dshow -i video="Blackmagic WDM Capture (4)" -vf format=yuv420p -c:v libx264 -preset ultrafast -qp 0 -an -tune zerolatency -f h264 - | ffplay -fflags nobuffer -flags low_delay -probesize 32 -sync ext -
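A variant I have not verified yet would move the low-latency options to the input side as well, so ffmpeg analyzes and buffers less before decoding the first frame (how much this actually gains is an assumption on my part):

ffmpeg -fflags nobuffer -flags low_delay -probesize 32 -analyzeduration 0 -rtbufsize 512M -f dshow -i video="Blackmagic WDM Capture (4)" -vf format=yuv420p -c:v libx264 -preset ultrafast -qp 0 -an -tune zerolatency -f h264 - | ffplay -fflags nobuffer -flags low_delay -probesize 32 -sync ext -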


I also used OBS to capture the video stream from the capture device, and in its preview there was no noticeable delay. I then tried to replicate the exact same settings using ffmpeg:


ffmpeg -rtbufsize 512M -f dshow -i video="Blackmagic WDM Capture (4)" -vf format=yuv420p -r 60 -c:v libx264 -preset veryfast -b:v 2500K -an -tune zerolatency -f h264 - | ffplay -fflags nobuffer -flags low_delay -probesize 32 -sync ext -


But the delay was quite similar to that of the command above.
I know that OBS probably has much more complex machinery going on (hardware optimization etc.), but at least this tells me that it is somehow possible to display the stream from the capture device without any noticeable latency (on my setup).


The approach that has worked best for me so far (in terms of delay) is to use Python and OpenCV to read frames from the capture device and display them. I also implemented my own framerate control (not perfect, I know), but when it comes to compression I am rather limited compared to FFmpeg, and the frame processing is also too slow at framerates of about 20 fps and above.


import cv2
import time

# Set desired parameters
FRAME_RATE = 15           # Framerate in frames per second
COMPRESSION_QUALITY = 25  # Compression quality for JPEG format (0-100)
COMPRESSION_FLAG = True   # Enable / disable compression

# Set capture device index (replace 4 with the index of your capture card)
cap = cv2.VideoCapture(4, cv2.CAP_DSHOW)

# Check if the capture device is opened successfully
if not cap.isOpened():
    print("Error: Could not open capture device")
    exit()

# Create an OpenCV window
# TODO: The window is scaled to fullscreen here (the source video is 1920x1080, the display is 1920x1200)
# I don't know the scaling algorithm behind this, but it seems to be a simple stretch / nearest neighbor
cv2.namedWindow('Frame', cv2.WINDOW_NORMAL)
cv2.setWindowProperty('Frame', cv2.WND_PROP_FULLSCREEN, cv2.WINDOW_FULLSCREEN)

# Loop to capture and display frames
while True:
    # Start timer for each frame processing cycle
    start_time = time.time()

    # Capture frame-by-frame
    ret, frame = cap.read()

    # If frame is read correctly, proceed
    if ret:
        if COMPRESSION_FLAG:
            # Perform compression
            _, compressed_frame = cv2.imencode('.jpg', frame, [int(cv2.IMWRITE_JPEG_QUALITY), COMPRESSION_QUALITY])
            # Decode the compressed frame
            frame = cv2.imdecode(compressed_frame, cv2.IMREAD_COLOR)

        # Display the frame
        cv2.imshow('Frame', frame)

    # Calculate elapsed time since the start of this frame processing cycle
    elapsed_time = time.time() - start_time

    # Calculate available time per frame
    available_time = 1.0 / FRAME_RATE

    # Check if processing time exceeds available time
    if elapsed_time > available_time:
        print("Warning: Frame processing time exceeds available time.")

    # Calculate time to sleep to maintain a consistent frame rate
    sleep_time = available_time - elapsed_time

    # If sleep time is positive, sleep to control frame rate
    if sleep_time > 0:
        time.sleep(sleep_time)

    # Break the loop if 'q' is pressed
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# Release the capture object and close the display window
cap.release()
cv2.destroyAllWindows()
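One idea I have not tried yet (so this is just a sketch, and the assumption that the blocking cap.read() is the bottleneck may be wrong) is to decouple capturing from displaying: a background thread keeps overwriting a single "latest frame" slot, so the display loop never waits on the driver and never shows frames that queued up behind a slow iteration. On some backends, cap.set(cv2.CAP_PROP_BUFFERSIZE, 1) may also shrink the driver-side queue.

import threading
import cv2

class LatestFrameGrabber:
    # Reads frames on a background thread, keeping only the newest one,
    # so the display loop never blocks on cap.read() or shows stale frames.
    def __init__(self, index, backend=cv2.CAP_DSHOW):
        self.cap = cv2.VideoCapture(index, backend)
        self.lock = threading.Lock()
        self.frame = None
        self.running = True
        self.thread = threading.Thread(target=self._reader, daemon=True)
        self.thread.start()

    def _reader(self):
        while self.running:
            ret, frame = self.cap.read()  # blocks until the driver delivers a frame
            if ret:
                with self.lock:
                    self.frame = frame  # overwrite: older frames are dropped, not queued

    def read(self):
        # Return a copy of the most recent frame (None until the first arrives)
        with self.lock:
            return None if self.frame is None else self.frame.copy()

    def release(self):
        self.running = False
        self.thread.join(timeout=1.0)
        self.cap.release()

grabber = LatestFrameGrabber(4)  # same device index as in the script above
cv2.namedWindow('Frame', cv2.WINDOW_NORMAL)
while True:
    frame = grabber.read()
    if frame is not None:
        cv2.imshow('Frame', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
grabber.release()
cv2.destroyAllWindows()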



I also thought about getting the SDK of the capture device in order to improve performance.
But since I am used to scripting languages rather than low-level programming, I thought I would reach out to the StackOverflow community first and see if anybody has hints toward better approaches, or any tips on how I could increase performance.


Any help is appreciated!