
Media (1)
-
Rennes Emotion Map 2010-11
19 October 2011, by
Updated : July 2013
Language : French
Type : Text
Other articles (65)
-
Taking part in its translation
10 April 2011
You can help us improve the wording used in the software, or translate it into any new language so that it can reach new linguistic communities.
To do this, we use the SPIP translation interface, where all of MediaSPIP's language modules are available. You simply need to join the translators' mailing list to ask for more information.
MediaSPIP is currently only available in French and (...)
-
Websites made with MediaSPIP
2 May 2011, by
This page lists some websites based on MediaSPIP.
-
Possibility of farm deployment
12 April 2011, by
MediaSPIP can be installed as a farm, with a single "core" hosted on a dedicated server and used by a multitude of different sites.
This makes it possible, for example : to share set-up costs between several projects/individuals ; to quickly deploy a multitude of unique sites ; to avoid having to put all creations into a digital catch-all, as is the case with the big general-public platforms scattered across the (...)
On other sites (9594)
-
Failed to hardware-accelerate video decoding via Tesla P40
20 May 2018, by Potemkin
While writing a surveillance video recognition demo, I found that simply playing a video is much slower on the server (Xeon E5-2680 2.4 GHz, Tesla P40) than on my laptop (i7-8550 1.8 GHz, Intel UHD Graphics 620).
I used DXVA-Checker to inspect the video decoder device and noticed that my laptop uses the GPU for decoding, while the server uses no GPU at all. I then checked ’nvidia-smi’, which shows all the GPUs running in the TCC driver model rather than the WDDM driver model, and I cannot switch them to WDDM because nvidia-smi says it is not supported.
The playback demo is written with OpenCV, whose video decoding comes from ffmpeg. The server runs Windows Server 2012 and my laptop runs Windows 10.
The question is : how can I get the server to decode video on the GPU, and is this the reason for the slowness, or is it something else ?
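One avenue that might be worth checking (I have not verified it on this server) : Tesla cards in TCC mode expose no DXVA device, but ffmpeg can reach NVDEC/CUVID directly. A minimal decode-only sketch, assuming an ffmpeg build with CUVID/NVDEC support ; the input file name is a placeholder :
# Hypothetical decode-only benchmark on the Tesla P40 via the CUVID decoder.
# Assumes ffmpeg was built with --enable-cuvid / --enable-nvdec and the NVIDIA
# driver is installed ; sample.mp4 stands in for one of the surveillance files.
ffmpeg -c:v h264_cuvid -i sample.mp4 -f null -
If this decodes noticeably faster than the plain software path, the bottleneck is indeed the missing hardware decode rather than something else in the pipeline.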
-
carrierwave-ffmpeg : convert .weba audio to .mp3 or .mp4
14 May 2018, by nmadoug
I use react-mic on the front end of my React/Rails application to capture audio from the user. react-mic formats the audio blob as .weba. I’m trying to convert the .weba to .mp4 or .mp3 using carrierwave-ffmpeg. I can isolate the issue to my Rails uploader, which is as follows :
require 'fog/aws'

class UserResponseUploader < CarrierWave::Uploader::Base
  include CarrierWave::FFmpeg

  if Rails.env.test?
    storage :file
  else
    storage :fog
  end

  version :mp3 do
    puts `which ffmpeg`
    # binding.pry
    process encode: [:mp3, audio_codec: "a libmp3lame -qscale:a 2"]
    # process :encode_audio => [:mp4, audio_codec: "aac", :custom => "-strict experimental -qscale:a 2"]

    def full_filename(for_file)
      super.chomp(File.extname(super)) + '.mp3'
    end
  end

  # Directory where uploaded files will be stored on file system or AWS
  def store_dir
    "uploads/#{model.class.to_s.underscore}/#{mounted_as}/#{model.id}"
  end
end
ffmpeg is installed on my local PC, and I can run the following command in the terminal to produce an audio file :
ffmpeg -i audio.weba -codec:a libmp3lame -qscale:a 2 willwork.mp3
My problem is that when I run everything in my code, I get the following error message :
NameError (undefined local variable or method `e' for #):
I can’t tell where the complaint is coming from. I have a feeling the encoding line is causing the problem and that I don’t have it set up properly.
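For reference, here is a hedged guess at how that line might be split up, assuming the gem accepts separate audio_codec: and custom: keys as the commented-out encode_audio line suggests (I have not confirmed this against the carrierwave-ffmpeg API) :
version :mp3 do
  # Hypothetical rewrite of the suspect line : codec name on its own, extra
  # ffmpeg flags passed as a separate :custom option, mirroring the
  # commented-out encode_audio call above. An untested guess, not the gem's
  # documented API.
  process encode: [:mp3, audio_codec: "libmp3lame", custom: "-qscale:a 2"]

  def full_filename(for_file)
    super.chomp(File.extname(super)) + '.mp3'
  end
end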
Any ideas ?
-
ffmpeg RTSP error while decoding MB
2 May 2018, by Hugh W
I’m using ffmpeg to read an h264 RTSP stream from a Cisco 3050 IP camera and re-encode it to disk as h264 (there are reasons why I’m not just using
-codec:copy
).
The ffmpeg version is as follows :
ffmpeg version 3.2.6 Copyright (c) 2000-2017 the FFmpeg developers
built with gcc 6.3.0 (Alpine 6.3.0)
I’ve also tried with ffmpeg 2.8.14-0ubuntu0.16.04.1 and the latest ffmpeg built from source (I used this commit) and see the same behaviour as below.
The command I’m running is :
ffmpeg -rtsp_transport udp -i 'rtsp://<user>:<pw>@<ip>:554/StreamingSetting?version=1.0&action=getRTSPStream&ChannelID=1&ChannelName=Channel1' -r 10 -c:v h264 -crf 23 -x264-params keyint=60:min-keyint=60 -an -f ssegment -segment_time 60 -strftime 1 /output/%Y%m%d_%H%M%S.ts -abort_on empty_output
I get a variety of errors at a fairly steady rate of at least one per second. Here’s a sample :
[rtsp @ 0x7f268c5e9220] max delay reached. need to consume packet
[rtsp @ 0x7f268c5e9220] RTP: missed 40 packets
[h264 @ 0x55b1e115d400] left block unavailable for requested intra mode
[h264 @ 0x55b1e115d400] error while decoding MB 0 12, bytestream 114567
[h264 @ 0x55b1e115d400] concealing 3889 DC, 3889 AC, 3889 MV errors in I frame
The most common one is ’error while decoding MB x x, bytestream x’. This corresponds to severe corruption in the video file when played back.
I see many references to that error message on stackoverflow and elsewhere, but I’ve yet to find a satisfying explanation or workaround. It comes from this line which appears to correspond to missing data at the end of the stream. ’left block unavailable’ comes from here and also looks like missing data.
Others have suggested using
-rtsp_transport tcp
instead (1, 2, 3) which in my case just gives a slightly different mix of errors, and still video corruption :
[h264 @ 0x557923191b00] left block unavailable for requested intra4x4 mode -1
[h264 @ 0x557923191b00] error while decoding MB 0 28, bytestream 31068
[h264 @ 0x557923191b00] concealing 2609 DC, 2609 AC, 2609 MV errors in I frame
[rtsp @ 0x7f88e817b220] CSeq 5 expected, 0 received.
Using Wireshark I confirmed that in both UDP and TCP mode, all of the packets are making it from the camera to the PC (sequential RTP sequence numbers without any missing), which makes me think the data is being lost after it arrives at ffmpeg.
I also see similar behaviour when running the same command against a Panasonic WV-SFV110 camera, but with less frequent errors overall. Switching from UDP to TCP on the Panasonic camera reduces but does not completely eliminate the errors/corruption.
I also tried a similar command with VLC and got similar errors (
cvlc rtsp://<user>:<pw>@<ip>/MediaInput/h264 :sout='#transcode{vcodec=h264}:std{access=file, mux=ts, dst="output.ts"}'
) — presumably the code hasn’t diverged much since libav forked from ffmpeg.
The camera is plugged directly into a PoE port on the PC, so network congestion can’t be a problem. Given that the PC has enough CPU to keep up with encoding the live stream, it seems to me a problem with ffmpeg that it still drops data from the TCP stream.
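For completeness, a variant of the command above with larger RTSP/UDP buffering would look something like this ; the extra options and their values are untested guesses, not settings from the runs described here :
# Hypothetical variant : bigger socket buffer, deeper RTP reorder queue and a
# longer demuxer delay before late packets are given up on. Values are guesses.
ffmpeg -rtsp_transport udp -buffer_size 2000000 -reorder_queue_size 5000 -max_delay 10000000 \
 -i 'rtsp://<user>:<pw>@<ip>:554/StreamingSetting?version=1.0&action=getRTSPStream&ChannelID=1&ChannelName=Channel1' \
 -r 10 -c:v h264 -crf 23 -x264-params keyint=60:min-keyint=60 -an \
 -f ssegment -segment_time 60 -strftime 1 /output/%Y%m%d_%H%M%S.ts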
Qualitatively, there are several factors which seem to make the problem worse :
- Higher video resolution
- Higher system load on the machine running ffmpeg (e.g. transcoding to a low res .avi file produces fewer errors than transcoding to h264 VBR ; using
-codec:copy
eliminates all errors except a couple while ffmpeg is starting up)
- Greater motion within the camera view
What does the error mean ? And what can I do about it ?