
Media (1)
-
The Great Big Beautiful Tomorrow
28 October 2011, by
Updated: October 2011
Language: English
Type: Text
Other articles (105)
-
Publishing on MediaSPIP
13 June 2013
Can I post content from an iPad tablet?
Yes, if your installed MediaSPIP is at version 0.2 or higher. If needed, contact the administrator of your MediaSPIP to find out -
Encoding and processing into web-friendly formats
13 April 2011, by
MediaSPIP automatically converts uploaded files to internet-compatible formats (an illustrative ffmpeg sketch follows after this list of articles).
Video files are encoded in MP4, Ogv and WebM (supported by HTML5) and MP4 (supported by Flash).
Audio files are encoded in MP3 and Ogg (supported by HTML5) and MP3 (supported by Flash).
Where possible, text is analyzed in order to retrieve the data needed for search engine indexing, and is then exported as a series of image files.
All uploaded files are stored online in their original format, so you can (...) -
MediaSPIP Player: the controls
26 May 2010, by
Mouse controls of the player
In addition to the actions triggered by clicking the visible buttons of the player interface, other actions can be performed with the mouse: Click: clicking on the video, or on the sound logo, starts playback or pauses it depending on its current state; Wheel (scrolling): when the mouse is placed over the area used by the media (hover), the mouse wheel no longer has its usual page-scrolling effect, but instead decreases or (...)
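As a rough illustration of the conversion described in the "Encoding and processing into web-friendly formats" article above, commands along these lines would produce the formats it mentions. This is a sketch only, with assumed file names and bitrates; it is not MediaSPIP's actual encoding pipeline or settings.

ffmpeg -i upload.mov -c:v libx264 -c:a aac video.mp4            # MP4: HTML5 and Flash playback
ffmpeg -i upload.mov -c:v libtheora -c:a libvorbis video.ogv    # OGV (Theora/Vorbis): HTML5 playback
ffmpeg -i upload.mov -c:v libvpx -c:a libvorbis video.webm      # WebM (VP8/Vorbis): HTML5 playback
ffmpeg -i master.wav -c:a libmp3lame -b:a 192k audio.mp3        # MP3: HTML5 and Flash playback
ffmpeg -i master.wav -c:a libvorbis audio.ogg                   # Ogg Vorbis: HTML5 playback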
On other sites (11075)
-
How to flatten a VR video to display on a normal screen? [closed]
14 May 2023, by d-b
I am not sure about the terminology here, but I have a VR video that is intended to be shown using a headset with separate screens for each eye. It is not 3D in the sense that you see something different when you turn your head; it is just "2.5D", so you get a sense of depth when looking at it. There are two video channels that are more or less identical; they are just recorded from slightly different angles, similar to how human eyes see the world. I hope this makes it clear what type of video I have; otherwise please ask for clarification in a comment (and if there is special terminology for this type of video, please let me know).


More details: the original video is 4320x2160, basically two square channels of 2160x2160 each.


I want to show this video undistorted on a regular screen.


I have read the following questions here on SO:

- How to reproject and join these two clips with ffmpeg?
- How to de-warp 180 degree or 360 degree fisheye video with ffmpeg?
- Unwarping 180 VR Footage with FFmpeg v360 Filter

(and probably a few more).


I think I want to extract the two video channels (note that they are in the same video stream, not like in a movie where you can have several separate audio streams for different languages) into separate files and then "undistort" them.


(3) gave me a command for splitting the video into two files:


ffmpeg -i myclip.mp4 -filter_complex "[0]crop=iw/2:ih:0:0[left];[0]crop=iw/2:ih:ow:0[right]" -map "[left]" -map 0:a /tmp/left.mp4 -map "[right]" -map 0:a /tmp/right.mp4



That seemed to work as expected, but I also need to "undistort" the content because it was filmed with some fisheye lens or something like that (straight lines that are not in the absolute centre of the image appear more or less circular).


(5) suggested this command:


ffmpeg -i left.mp4 -vf "v360=input=hequirect:output=flat:h_fov=100:v_fov=67.5:w=1280:h=720" leftfixed.mp4



but that produced an output that was 4320x2160 (obviously from only one channel, since the input was just one channel) showing just the centre of the original image; I estimate the content to be about 500x250 px of the midpoint of the original image, upscaled to 4320x2160 and therefore very blocky.


How can I "undistort" this video so that it looks good on a 2D screen while the size is preserved?
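As a sketch of one direction to explore (not a verified answer): the same v360 filter as in the command above, but with a wider output field of view and an output size matching one eye of the source, so that less of the frame is discarded and nothing has to be upscaled. The h_fov/v_fov values are guesses to adjust by eye, and left.mp4 is assumed to be the 2160x2160 left eye produced by the crop command earlier.

# Sketch only: the projection, FOV values and output size are assumptions, not known properties of the clip.
ffmpeg -i left.mp4 -vf "v360=input=hequirect:output=flat:h_fov=120:v_fov=90:w=2160:h=2160" leftflat.mp4

Note that a flat (rectilinear) output cannot show the full 180° of a half-equirectangular eye without extreme stretching towards the edges, so raising h_fov/v_fov trades a wider view for more distortion; some cropping of the field of view is unavoidable.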


-
How to stream H.264 bitstream to browser
21 January 2019, by BobtheMagicMoose
This is a follow-up to https://raspberrypi.stackexchange.com/questions/93254/stream-usb-webcam-with-audio?noredirect=1#comment150507_93254
I, like many other brave tinkerers before me, thought it would be a simple task to take an old USB camera (C920) and pair it with a Raspberry Pi to make a network streaming device (e.g., a baby monitor). Like those who have gone before me, I have now realized (after two days of tearing my hair out) that this is an extremely complicated task.
Problem statement: I have a Raspberry Pi Zero and a C920 webcam. I want to use the H.264 bitstream from the webcam and serve it on the Pi without transcoding it (the feeble processor would really struggle). I want to combine the video stream with its audio and send it over to a browser (phone, tablet, PC - something HTML5 without NAPI).
My current strategy is to do the following:
ffmpeg -re -f s16le -i /dev/zero -f v4l2 -thread_queue_size 512 -codec:v h264 -s 1920x1080 -i /dev/video0 -codec:v copy -acodec aac -ab 128k -g 50 http://localhost:8090/camera.ffm
(this is with dummy audio - I figured I would add audio later.) Followed by
sudo ffserver -d -f /etc/ffserver.conf
to receive the feed and broadcast it as a stream. This is the ffserver.conf file:
HTTPPort 8090
HTTPBindAddress 0.0.0.0
MaxHTTPConnections 2000
MaxClients 1000
MaxBandwidth 100000
CustomLog -
<feed>
File /tmp/streamwebm.ffm
FileMaxSize 50M
ACL allow localhost
ACL allow 128.199.149.46
#ACL allow 127.0.0.1
ACL allow 192.168.0.0 192.168.0.255
</feed>
<Stream stream>
Format webm
# Video Settings
VideoFrameRate 30
VideoSize 1920x1080
# Audio settings
AudioCodec libvorbis
AudioSampleRate 48000
AVOptionAudio flags +global_header
MaxTime 0
AVOptionVideo me_range 16
AVOptionVideo qdiff 4
AVOptionVideo qmin 4
AVOptionVideo qmax 40
#AVOptionVideo good
AVOptionVideo flags +global_header
# Streaming settings
PreRoll 10
StartSendOnKey
Metadata author "author"
Metadata copyright "copyright"
Metadata title "Web app name"
Metadata comment "comment"
</Stream>
My basic HTML is:
<video><source src="http://localhost:8090/stream"></video>
The stream, however, doesn't work (the browser won't connect) and I get the following:
And the browser on the client says
(failed) NET::ERR_CONNECTION_REFUSED
Thoughts:
- "Begin stream simple mp4 with ffserver" explains that ffserver can't stream .mp4 because of headers or something. This is why I am using WebM (which doesn't support H.264, I believe, and is causing the really slow performance converting to VP9). I'm not concerned about CPU usage at the moment; I just want to get an image to appear in the browser!
- I hear one issue deals with 'chunking': the camera's H.264 is a bitstream, but H.264 streams for HTML5 should be chunked. Not sure how that would work.
- I have tried VLC for some things (RTP) but haven't had success.
- Most resources (SE and other sites) are from 2010-2015, and it seems as though v4l2 and other things have developed since then.
As my problem is most likely general ignorance of the subject matter, I would appreciate any answers that provide some general understanding of the theory behind the different techniques. I know this makes the question more of a call for opinion and less appropriate for SE, but I'm fixing to throw my computer out the window (you know the feeling).
Thank you!
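As an aside, not part of the original post: one common way to get an H.264 bitstream to browsers without re-encoding is to package it as HLS segments and serve them as static files (played natively by Safari, or via hls.js elsewhere). A rough sketch, in which the device path, resolution, segment length and web root are all assumptions:

# Sketch only: copy the camera's H.264 stream as-is into short, rolling HLS segments
# under a web server's document root; no transcoding is performed.
ffmpeg -f v4l2 -input_format h264 -video_size 1920x1080 -framerate 30 -i /dev/video0 \
       -c:v copy -f hls -hls_time 2 -hls_list_size 5 -hls_flags delete_segments \
       /var/www/html/live/stream.m3u8

The page would then point its video source at stream.m3u8 instead of the ffserver URL.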
-
expected end of line but found unknown token
17 June 2017, by Denzil Williams
OK, so after days of searching, here I am. I am new to ffmpeg, AppleScript, and Terminal.
I want to use ffmpeg to batch convert a group of selected files in any folder. I was successful in doing this by opening Terminal at the folder location and using this code:
for f in *.flv; do ffmpeg -i "$f" -acodec libmp3lame -b:a 256k "${f%.flv}.mp3" && rm "$f"; done
which finds all FLV files, converts them to 256 kbps MP3, and then deletes the original files.
Now I want it to be more automated, so I looked into creating a service. I tried running an AppleScript through Automator; I want it to open Terminal at the folder location of the file and then run the code to convert the files. Here's the code I attempted:
tell application "Finder" to set currentFolder to target of front Finder window as text
set theWin to currentFolder's POSIX path
tell application "Terminal"
    if not (exists window 1) then reopen
    activate
    do script "cd " & quoted form of theWin & ";clear" in window 1
    tell application "Terminal"
        do script "for f in *.flv; do ffmpeg -i "$f" -acodec libmp3lame -b:a 256k "${f%.flv}.mp3" && rm "$f"; done"
    end tell
end tell
The first part of the code opens Terminal at the folder location just fine, but when I add the part with the ffmpeg code it crashes. The error is apparently with the "$" characters: those are what light up as the error, and the error message says "Expected end of line, but found unknown token". I'm looking for some assistance, please. I need the "$" because it is what makes the loop work for renaming the files and such.
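One possible workaround, sketched under the assumption that the quoting of the embedded shell command is what trips AppleScript: keep the ffmpeg loop in a standalone shell script (the name convert-flv.sh and its location are made up for illustration), so the do script line only has to pass a folder path rather than a string full of quotes and $ characters.

#!/bin/sh
# convert-flv.sh - sketch only: converts every .flv in the folder given as the first
# argument to a 256 kbps MP3 and then removes the source file.
cd "$1" || exit 1
for f in *.flv; do
  ffmpeg -i "$f" -acodec libmp3lame -b:a 256k "${f%.flv}.mp3" && rm "$f"
done

The AppleScript would then call the script with quoted form of theWin as its argument, avoiding the embedded double quotes entirely.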