
Other articles (83)
-
Submitting bugs and patches
10 April 2011
Unfortunately, software is never perfect...
If you think you have found a bug, report it in our ticket system, taking care to give us the relevant information: the browser type and exact version with which you encounter the anomaly; the most precise explanation possible of the problem; if possible, the steps to reproduce the problem; a link to the site/page in question.
If you think you have fixed the bug yourself (...) -
Contribute to a better visual interface
13 April 2011
MediaSPIP is based on a system of themes and templates. Templates define the placement of information on the page, and can be adapted to a wide range of uses. Themes define the overall graphic appearance of the site.
Anyone can submit a new graphic theme or template and make it available to the MediaSPIP community. -
Automated installation script of MediaSPIP
25 April 2011
To overcome the difficulties mainly due to the installation of server-side software dependencies, an "all-in-one" installation script written in bash was created to facilitate this step on a server running a compatible Linux distribution.
To use it, you must have SSH access to your server and a root account, which the script needs in order to install the dependencies. Contact your provider if you do not have these.
The documentation of the use of this installation script is available here.
The code of this (...)
On other sites (6842)
-
VLC syntax to transcode and stream to stdout?
19 November 2016, by Will Tower
Goal: I am trying to use VLC as a local server to expand the video capabilities of an app created with Adobe AIR, Flex and ActionScript. I am using VLC to stream to stdout and reading that output from within my app.

VLC Streaming capabilities
VLC Flash Video
Stream VLC to Website with asf and Flash

Status: I am able to launch VLC as a background process and control it through its remote control interface (more detail). I can load, transcode and stream a local video file. The example app below is a barebones testbed demonstrating this.

Issue: I am getting data into my app but it is not rendering as video. I don't know if it is a problem with my VLC commands or with writing to/reading from stdout. This technique of reading from stdout in AIR works (with ffmpeg, for example).

One of the various transcoding commands I have tried:
-I rc // remote control interface
-vvv // verbose debugging
--sout // transcode, stream to stdout
"#transcode{vcodec=FLV1}:std{access=file,mux=ffmpeg{mux=flv},dst=-}"

This results in data coming into my app, but for some reason it is not rendering as video when using appendBytes with the NetStream instance.

If instead I write the data to an .flv file, a valid file is created – so the broken part seems to be writing it to stdout. One thing I have noticed: I am not getting metadata through the stdout method. If I play the file created with the command below, I do see metadata.

// writing to a file
var output:File = File.desktopDirectory.resolvePath("stream.flv");
var outputPath:String = output.nativePath;
"#transcode{vcodec=FLV1}:std{access=file,mux=ffmpeg{mux=flv},dst=" + outputPath + "}");
Hoping someone sees where I am going wrong here.
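
A minimal way to sanity-check the stdout leg outside of AIR is to launch VLC with the same --sout chain and inspect the first bytes it writes; a healthy stream should begin with the 13-byte FLV file header. The sketch below is in Python and purely illustrative; VLC being on PATH and "input.mp4" are assumptions, not details from the original setup.

# Debugging sketch: run VLC with the question's sout chain and check
# whether the bytes on stdout start with an FLV header.
import subprocess

cmd = [
    "vlc", "-I", "dummy", "input.mp4",   # "input.mp4" is a placeholder
    "--sout", "#transcode{vcodec=FLV1}:std{access=file,mux=ffmpeg{mux=flv},dst=-}",
]
proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.DEVNULL)

head = proc.stdout.read(13)   # 9-byte FLV header + 4-byte PreviousTagSize0
proc.terminate()

print(head.hex())
if head[:3] == b"FLV":
    print("FLV signature present; type flags byte:", head[4])
else:
    print("stream does not start with an FLV header")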
Update 1: Just to add some more detail (!) – I took a look at the .flv file that is generated to examine the metadata. It appears at the head of the file as shown below. I have the correct onMetaData handler set up and see a trace of this data if I play the file from disk. I do not see this trace when reading from stdout and NetStream is in Data Generation mode. Is it possible that it isn't getting sent to stdout for some reason? I've tried generating my own header and appending that before the stream starts – I may not have the header format correct.
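
For reference, the 13-byte prefix defined by the FLV specification (signature "FLV", version 1, type flags, DataOffset of 9, then PreviousTagSize0 of 0) can be built as in the Python sketch below; real code would assemble the same bytes in ActionScript before the first appendBytes call. Whether the audio flag should be set depends on what the transcode chain actually emits, which is an assumption here.

# Illustrative sketch of the FLV file header expected before any FLV tags.
# Type flags: bit 0 = video present, bit 2 = audio present.
import struct

def flv_prefix(has_video=True, has_audio=False):
    flags = (0x01 if has_video else 0x00) | (0x04 if has_audio else 0x00)
    header = b"FLV" + bytes([0x01, flags]) + struct.pack(">I", 9)  # 9-byte header
    return header + struct.pack(">I", 0)                           # PreviousTagSize0

print(flv_prefix().hex())   # 464c5601010000000900000000 for video-only

If the bytes arriving from stdout never start with exactly this prefix, that would be consistent with the file written to disk playing while the piped stream does not.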
Update 2: So in my AIR app I was able to crudely parse the incoming stdout stream coming from VLC. I wanted to see if the FLV header data was being sent – and it appears that it is. I don't know if it is in the correct format, etc., but as I mention above, if I write to an .flv file instead of stdout, a valid .flv file is created.

Completely at a loss now – have tried everything I could think of and followed up every web link I could find on the issues involved. Alas – so close, and it would have been so cool to leverage VLC from within AIR.
-
slicing and seeking extremely small sections of video in ffmpeg
20 February 2021, by Zarc Rowden
I am writing a program that maps MIDI data to timestamps in a video. The end result is a kind of automatic generation of audio visuals for beat making or techno heads. The program takes in MIDI, slices a video into chunks based on the MIDI events and mappings, and finally joins the slices into a video with 1:1 timing of monophonic MIDI notes to sections of the video.


When it is successful, the result is very cool, and watching the video jump around and lock in to the MIDI notes is very interesting.


However, I am afraid that the ffmpeg commands I use are not giving exact results.


The data I feed to ffmpeg looks like this:


EVENTS: left is the MIDI note number, right is the time from the start of the recording at which the note occurs.


[{"note"=>"start", "timestamp"=>0.0},
 {"note"=>48, "timestamp"=>0.5700000037904829},
 {"note"=>51, "timestamp"=>383.7100000018836},
 {"note"=>45, "timestamp"=>884.3500000002678},
 {"note"=>48, "timestamp"=>999.0449999968405},
 {"note"=>51, "timestamp"=>1383.544999996957},
 {"note"=>45, "timestamp"=>1884.2599999989034},
 {"note"=>48, "timestamp"=>1998.890000002575},
 {"note"=>51, "timestamp"=>2383.4199999982957},
 {"note"=>45, "timestamp"=>2884.1000000029453},
 {"note"=>48, "timestamp"=>2998.7200000032317},
 {"note"=>51, "timestamp"=>3383.2800000018324},
 {"note"=>45, "timestamp"=>3883.894999999029},
 {"note"=>48, "timestamp"=>3998.6250000001746},
 {"note"=>51, "timestamp"=>4384.550000002491},
 {"note"=>45, "timestamp"=>4883.780000003753},
 {"note"=>48, "timestamp"=>4998.404999998456},
 {"note"=>51, "timestamp"=>5384.39500000095},
 {"note"=>45, "timestamp"=>5883.565000003728},
 {"note"=>48, "timestamp"=>5998.464999996941},
 {"note"=>51, "timestamp"=>6384.254999997211},
 {"note"=>45, "timestamp"=>6883.4550000028685},
 {"note"=>48, "timestamp"=>6998.585000001185},
 {"note"=>51, "timestamp"=>7384.055000002263},
 {"note"=>45, "timestamp"=>7883.249999998952},],



MAPPINGS: left side is the MIDI note, right is the timestamp in seconds.


{
 48=>234.3489,
 45=>124.334489,
 51=>2789.34,
}



The events are a sequential array of MIDI notes and times taken from a recording or a standard MIDI file. The numbers are in milliseconds, but I convert them to seconds for ffmpeg before building the arguments.


The mappings are just in seconds and tell the program what to show when certain MIDI notes are encountered as we loop through the events and begin slicing the video.
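
As a rough illustration of that loop (a sketch with assumed file names and a trimmed event list, written here in Python rather than the poster's actual code), each event's note picks the seek point from the mappings and the gap to the next event gives the slice duration, converted from milliseconds to seconds:

# Sketch only: assumed names, simplified data.
import subprocess

events = [
    {"note": "start", "timestamp": 0.0},
    {"note": 48, "timestamp": 0.57},
    {"note": 51, "timestamp": 383.71},
    {"note": 45, "timestamp": 884.35},
]
mappings = {48: 234.3489, 45: 124.334489, 51: 2789.34}
project_tempfile_url = "project.mp4"   # placeholder for the source video

slice_files = []
for i, ev in enumerate(events[:-1]):
    if ev["note"] not in mappings:     # skip "start" and unmapped notes
        continue
    begin_at = mappings[ev["note"]]                                       # seconds
    slice_duration = (events[i + 1]["timestamp"] - ev["timestamp"]) / 1000.0  # ms -> s
    temp_url = "slice_%04d.mp4" % i
    subprocess.run(
        ["ffmpeg", "-an", "-y", "-ss", "%.6f" % begin_at,
         "-i", project_tempfile_url, "-t", "%.6f" % slice_duration,
         "-c:v", "libx264", temp_url],
        check=True)
    slice_files.append(temp_url)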


The command I send to ffmpeg is constructed like this:


"ffmpeg -an -y -ss #{begin_at} -i #{project_tempfile_url} -t #{slice_duration} -c:v libx264 #{temp_url}"



When I concatenate these slices, they only look exact when my notes are very consistent, like a kick drum playing 4/4 rhythms. Anything too fast or varied creates unpleasant results.


Is there a specific set of options that will tell ffmpeg to cut down to the exact frame? I think keyframes are not an ideal answer, but I'm not sure. I could also adjust by making sure that I only ever map the notes to keyframes; I can settle for that, but it would be great if I could just cut almost anywhere between start and end, like ANYWHERE, for example


rand(0...video.length)
 # and then have
 332.3253613134



But I may just be dreaming :P


Do you think that I would be better off writing a custom C program to cut frames like this? I understand that frame rates could be an issue, that there may actually not be any data at 7.34667898999 seconds and that it might be at 7.356788722342 instead, and that ffmpeg probably seeks to the nearest frame from whatever timestamp you give it, but I feel like there must be a way to get good results despite these limitations.
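
One approach worth trying before reaching for a custom C program (a sketch under assumptions, not a guaranteed fix): probe the source's frame rate with ffprobe and snap every requested timestamp to the nearest frame boundary, so each cut lands exactly on a frame. Because the slices are re-encoded with libx264 rather than stream-copied, ffmpeg does not have to start on a keyframe; keyframe-granularity cuts are mainly a limitation of -c copy. File names below are placeholders.

# Sketch: snap cut points to frame boundaries before slicing.
import subprocess
from fractions import Fraction

def frame_rate(path):
    # Ask ffprobe for the video stream's frame rate, e.g. "30000/1001".
    out = subprocess.run(
        ["ffprobe", "-v", "error", "-select_streams", "v:0",
         "-show_entries", "stream=r_frame_rate",
         "-of", "default=noprint_wrappers=1:nokey=1", path],
        capture_output=True, text=True, check=True).stdout.strip()
    return Fraction(out)

def snap_to_frame(t_seconds, fps):
    # Round an arbitrary timestamp to the nearest frame boundary.
    return float(round(t_seconds * fps) / fps)

fps = frame_rate("project.mp4")
begin_at = snap_to_frame(7.34667898999, fps)
slice_duration = snap_to_frame(0.5, fps)
subprocess.run(
    ["ffmpeg", "-an", "-y", "-ss", "%.6f" % begin_at, "-i", "project.mp4",
     "-t", "%.6f" % slice_duration, "-c:v", "libx264", "slice.mp4"],
    check=True)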


Thank you so much in advance to those who take the time to read this and understand this issue.


-
How do I download the image with metadata in Python? [closed]
19 October 2024, by Temp Account
I am downloading some images in Python from the Airtable API and am trying to make a slideshow with them using ffmpeg. I download the images:


urllib.request.urlretrieve(img['url'], "output/images/image_" + str(i) + ".jpeg")



However, when I run the following ffmpeg command


ffmpeg -framerate 4/60 -i output/images/image_%d.jpeg output/out.mp4



I get the following error:


ffmpeg version 6.1.1-3ubuntu5 Copyright (c) 2000-2023 the FFmpeg developers
 built with gcc 13 (Ubuntu 13.2.0-23ubuntu3)
 configuration: --prefix=/usr --extra-version=3ubuntu5 --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-l
inux-gnu --arch=amd64 --enable-gpl --disable-stripping --disable-omx --enable-gnutls --enable-libaom --enable-libass --enable-libbs2b --enable
-libcaca --enable-libcdio --enable-libcodec2 --enable-libdav1d --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribi
di --enable-libglslang --enable-libgme --enable-libgsm --enable-libharfbuzz --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enab
le-libopenmpt --enable-libopus --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libtheo
ra --enable-libtwolame --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libx
vid --enable-libzimg --enable-openal --enable-opencl --enable-opengl --disable-sndio --enable-libvpl --disable-libmfx --enable-libdc1394 --ena
ble-libdrm --enable-libiec61883 --enable-chromaprint --enable-frei0r --enable-ladspa --enable-libbluray --enable-libjack --enable-libpulse --enable-librabbitmq --enable-librist --enable-libsrt --enable-libssh --enable-libsvtav1 --enable-libx264 --enable-libzmq --enable-libzvbi --enab
le-lv2 --enable-sdl2 --enable-libplacebo --enable-librav1e --enable-pocketsphinx --enable-librsvg --enable-libjxl --enable-shared
 libavutil 58. 29.100 / 58. 29.100
 libavcodec 60. 31.102 / 60. 31.102
 libavformat 60. 16.100 / 60. 16.100
 libavdevice 60. 3.100 / 60. 3.100
 libavfilter 9. 12.100 / 9. 12.100
 libswscale 7. 5.100 / 7. 5.100
 libswresample 4. 12.100 / 4. 12.100
 libpostproc 57. 3.100 / 57. 3.100
[mjpeg @ 0x593283e5e3c0] bits 150 is invalid
[mjpeg @ 0x593283e5e3c0] bits 28 is invalid
[image2 @ 0x593283e5d380] Could not find codec parameters for stream 0 (Video: mjpeg (Lossless), none(bt470bg/unknown/unknown), lossless): uns
pecified size
Consider increasing the value for the 'analyzeduration' (0) and 'probesize' (5000000) options
Input #0, image2, from 'output/images/image_%d.jpeg':
 Duration: 00:01:00.00, start: 0.000000, bitrate: N/A
 Stream #0:0: Video: mjpeg (Lossless), none(bt470bg/unknown/unknown), lossless, 0.07 fps, 0.07 tbr, 0.07 tbn
File 'output/out.mp4' already exists. Overwrite? [y/N] y
Stream mapping:
 Stream #0:0 -> #0:0 (mjpeg (native) -> h264 (libx264))
Press [q] to stop, [?] for help
[mjpeg @ 0x593283e5f180] mjpeg: unsupported coding type (cf)
[mjpeg @ 0x593283e5f180] mjpeg: unsupported coding type (c8)
[mjpeg @ 0x593283e5f180] bits 150 is invalid
[vist#0:0/mjpeg @ 0x593283e5f000] Error submitting packet to decoder: Invalid data found when processing input
[mjpeg @ 0x593283e5f180] bits 28 is invalid
[vist#0:0/mjpeg @ 0x593283e5f000] Error submitting packet to decoder: Invalid data found when processing input
[mjpeg @ 0x593283e5f180] mjpeg: unsupported coding type (ce)
[mjpeg @ 0x593283e5f180] mjpeg: unsupported coding type (c6)
[mjpeg @ 0x593283e5f180] unable to decode APP fields: Invalid data found when processing input
 Last message repeated 1 times
[vist#0:0/mjpeg @ 0x593283e5f000] Error submitting packet to decoder: Invalid data found when processing input
[mjpeg @ 0x593283e5f180] unable to decode APP fields: Invalid data found when processing input
[mjpeg @ 0x593283e5f180] invalid id 255
[vist#0:0/mjpeg @ 0x593283e5f000] Error submitting packet to decoder: Invalid data found when processing input
Cannot determine format of input stream 0:0 after EOF
Error marking filters as finished
Error while filtering: Invalid data found when processing input
[vist#0:0/mjpeg @ 0x593283e5f000] Decode error rate 1 exceeds maximum 0.666667
[out#0/mp4 @ 0x593283e603c0] Nothing was written into output file, because at least one of its streams received no packets.
frame= 0 fps=0.0 q=0.0 Lsize= 0kB time=N/A bitrate=N/A speed=N/A 
Conversion failed!




However, downloading the images in Chrome and then creating the slideshow is successful. The images downloaded via Chrome have metadata for the file type (JPEG), width and height; the images downloaded with Python have no metadata. How do I download that information so that my ffmpeg command will succeed?
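
One way to narrow this down (a sketch, not a confirmed fix, since the log above suggests the downloaded files may not be plain baseline JPEGs at all) is to check the first bytes of what Python actually saved and, if Pillow can open the file, re-save it as an ordinary JPEG before handing it to ffmpeg. Pillow, the User-Agent header and the paths here are assumptions mirroring the question's layout.

# Hypothetical check: verify each downloaded file is really a JPEG and
# normalize it with Pillow so ffmpeg's mjpeg decoder can read it.
import urllib.request
from PIL import Image   # pip install Pillow

def fetch_and_check(url, path):
    req = urllib.request.Request(url, headers={"User-Agent": "Mozilla/5.0"})
    with urllib.request.urlopen(req) as resp, open(path, "wb") as f:
        f.write(resp.read())

    with open(path, "rb") as f:
        soi = f.read(2)
    if soi != b"\xff\xd8":                      # JPEG files start with FF D8
        print(path, "does not start with a JPEG marker:", soi.hex())

    im = Image.open(path)
    print(path, im.format, im.size, im.mode)
    im.convert("RGB").save(path, "JPEG")        # re-save as a plain JPEG

# usage, mirroring the loop in the question (img and i are assumed):
# fetch_and_check(img["url"], "output/images/image_" + str(i) + ".jpeg")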


Thanks!