
Media (91)
-
Collections - Quick creation form
19 February 2013, by
Updated: February 2013
Language: French
Type: Image
-
Les Miserables
4 June 2012, by
Updated: February 2013
Language: English
Type: Text
-
Hiding certain information: the home page
23 November 2011, by
Updated: November 2011
Language: French
Type: Image
-
The Great Big Beautiful Tomorrow
28 October 2011, by
Updated: October 2011
Language: English
Type: Text
-
Richard Stallman et la révolution du logiciel libre - Une biographie autorisée (version epub)
28 October 2011, by
Updated: October 2011
Language: English
Type: Text
-
Rennes Emotion Map 2010-11
19 October 2011, by
Updated: July 2013
Language: French
Type: Text
Other articles (60)
-
Installation in farm mode
4 February 2011, by
Farm mode makes it possible to host several MediaSPIP-type sites while installing the functional core only once.
This is the method we use on this very platform.
Using farm mode requires some familiarity with how SPIP works, unlike the standalone version, which does not really require any specific knowledge, since SPIP's usual private area is no longer used.
To begin with, you must have installed the same files as the installation (...) -
Multilang: improving the interface for multilingual blocks
18 February 2011, by
Multilang is an additional plugin that is not enabled by default when MediaSPIP is initialized.
Once it has been activated, MediaSPIP init automatically applies a preconfiguration that makes the new feature immediately operational, so no configuration step is required for this. -
List of compatible distributions
26 April 2011, by
The table below lists the Linux distributions compatible with MediaSPIP's automated installation script.

Distribution name   Version name           Version number
Debian              Squeeze                6.x.x
Debian              Wheezy                 7.x.x
Debian              Jessie                 8.x.x
Ubuntu              The Precise Pangolin   12.04 LTS
Ubuntu              The Trusty Tahr        14.04
If you want to help us improve this list, you can give us access to a machine running a distribution not mentioned above, or send us the fixes needed to add (...)
On other sites (9928)
-
What could be the cause of these HTTP live-stream artefacts in Google Chrome?
21 April 2021, by NGauthier
Here is the HTTP live-stream setup: the server runs ffmpeg with the DASH protocol and H.264 encoding, and the client uses Dash.js. The resolution is fixed at 1920x1080, with 24-bit depth, at 60 Hz.

The artefacting (image below) is only present when the last row of the video is inside the Chrome viewport (it disappears if the page is scrolled up). It manifests itself as stretching of the center row of pixels downwards, and appears to affect only some color channels.


I have attempted changing the bitrate and cutting the last row from the source, thinking the issue could be on the server side, but neither had any impact. The fact that the issue depends on the position in the viewport makes me suspect a glitch in Chrome itself.


I have also attempted to force hardware decoding off in chrome://flags, and it does not solve the issue.


Please submit your hypotheses on what could be the cause of this issue. Thanks.





Update #1


Here is the ffmpeg command line and the logs:


export DISPLAY=:0 && ffmpeg -f x11grab -framerate 60 -video_size 1920x1080 -i :0.0+0,0 -draw_mouse 0 -f dash -utc_timing_url https://time.akamai.com/?iso -streaming 1 -seg_duration 2 -frag_duration 0.033 -fflags nobuffer -fflags flush_packets -c:v h264 -preset ultrafast data/stream.mpd
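One variant worth trying before digging further (an untested hypothesis on my part, not something I have verified) is forcing 4:2:0 chroma subsampling with -pix_fmt yuv420p: the logs below show libx264 picking the High 4:4:4 Predictive profile for the bgr0 screen grab, and browser decoders are far better exercised on 4:2:0 content:

export DISPLAY=:0 && ffmpeg -f x11grab -framerate 60 -video_size 1920x1080 -i :0.0+0,0 -draw_mouse 0 -f dash -utc_timing_url https://time.akamai.com/?iso -streaming 1 -seg_duration 2 -frag_duration 0.033 -fflags nobuffer -fflags flush_packets -c:v h264 -preset ultrafast -pix_fmt yuv420p data/stream.mpd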



And the logs:


ffmpeg version 4.2.4-1ubuntu0.1 Copyright (c) 2000-2020 the FFmpeg developers
 built with gcc 9 (Ubuntu 9.3.0-10ubuntu2)
 configuration: --prefix=/usr --extra-version=1ubuntu0.1 --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --arch=amd64 --enable-gpl --disable-stripping --enable-avresample --disable-filter=resample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libaom --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libcodec2 --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libjack --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librsvg --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzmq --enable-libzvbi --enable-lv2 --enable-omx --enable-openal --enable-opencl --enable-opengl --enable-sdl2 --enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-nvenc --enable-chromaprint --enable-frei0r --enable-libx264 --enable-shared
 libavutil 56. 31.100 / 56. 31.100
 libavcodec 58. 54.100 / 58. 54.100
 libavformat 58. 29.100 / 58. 29.100
 libavdevice 58. 8.100 / 58. 8.100
 libavfilter 7. 57.100 / 7. 57.100
 libavresample 4. 0. 0 / 4. 0. 0
 libswscale 5. 5.100 / 5. 5.100
 libswresample 3. 5.100 / 3. 5.100
 libpostproc 55. 5.100 / 55. 5.100
[x11grab @ 0x561ca34b9980] Stream #0: not enough frames to estimate rate; consider increasing probesize
Input #0, x11grab, from ':0.0+0,0':
 Duration: N/A, start: 1618941693.853256, bitrate: N/A
 Stream #0:0: Video: rawvideo (BGR[0] / 0x524742), bgr0, 1920x1080, 60 fps, 1000k tbr, 1000k tbn, 1000k tbc
Stream mapping:
 Stream #0:0 -> #0:0 (rawvideo (native) -> h264 (libx264))
Press [q] to stop, [?] for help
[libx264 @ 0x561ca34c5300] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX FMA3 BMI2 AVX2 AVX512
[libx264 @ 0x561ca34c5300] profile High 4:4:4 Predictive, level 4.2, 4:4:4 8-bit
[libx264 @ 0x561ca34c5300] 264 - core 155 r2917 0a84d98 - H.264/MPEG-4 AVC codec - Copyleft 2003-2018 - http://www.videolan.org/x264.html - options: cabac=0 ref=1 deblock=0:0:0 analyse=0:0 me=dia subme=0 psy=1 psy_rd=1.00:0.00 mixed_ref=0 me_range=16 chroma_me=1 trellis=1 8x8dct=0 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=6 threads=6 lookahead_threads=1 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=0 weightp=0 keyint=250 keyint_min=25 scenecut=0 intra_refresh=0 rc=crf mbtree=0 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=0
[dash @ 0x561ca34c3740] No bit rate set for stream 0
[dash @ 0x561ca34c3740] Opening 'data/init-stream0.m4s' for writing
Output #0, dash, to 'data/stream.mpd':
 Metadata:
 encoder : Lavf58.29.100
 Stream #0:0: Video: h264 (libx264), yuv444p, 1920x1080, q=-1--1, 60 fps, 15360 tbn, 60 tbc
 Metadata:
 encoder : Lavc58.54.100 libx264
 Side data:
 cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: -1
[dash @ 0x561ca34c3740] Opening 'data/chunk-stream0-00001.m4s.tmp' for writing
frame= 34 fps=0.0 q=15.0 size=N/A time=00:00:00.43 bitrate=N/A dup=5 drop=0 speed=0.836x 
frame= 65 fps= 64 q=15.0 size=N/A time=00:00:00.95 bitrate=N/A dup=5 drop=0 speed=0.929x 
frame= 96 fps= 62 q=15.0 size=N/A time=00:00:01.46 bitrate=N/A dup=5 drop=2 speed=0.955x 
frame= 126 fps= 62 q=15.0 size=N/A time=00:00:01.96 bitrate=N/A dup=5 drop=3 speed=0.962x 
frame= 157 fps= 62 q=15.0 size=N/A time=00:00:02.48 bitrate=N/A dup=5 drop=3 speed=0.973x 
frame= 188 fps= 61 q=15.0 size=N/A time=00:00:03.00 bitrate=N/A dup=5 drop=3 speed=0.98x 
frame= 217 fps= 61 q=15.0 size=N/A time=00:00:03.48 bitrate=N/A dup=5 drop=3 speed=0.977x 
frame= 247 fps= 61 q=15.0 size=N/A time=00:00:03.98 bitrate=N/A dup=6 drop=3 speed=0.976x 
[dash @ 0x561ca34c3740] Opening 'data/stream.mpd.tmp' for writing
[dash @ 0x561ca34c3740] Opening 'data/chunk-stream0-00002.m4s.tmp' for writing
frame= 279 fps= 61 q=15.0 size=N/A t



-
Google Speech API + Go - Transcribing Audio Stream of Unknown Length
14 February 2018, by Josh
I have an RTMP stream of a video call and I want to transcribe it. I have created two services in Go, and I'm getting results, but they're not very accurate and a lot of data seems to get lost.
Let me explain.
I have a transcode service: it uses ffmpeg to transcode the video to LINEAR16 audio and places the output bytes onto a PubSub queue for a transcribe service to handle. Obviously there is a limit to the size of a PubSub message, and I want to start transcribing before the end of the video call, so I chunk the transcoded data into 3-second clips (not a fixed length, it just seems about right) and put them onto the queue.
The data is transcoded quite simply:
// Assumed context: "bytes", "os/exec", and "time" are imported; log is a
// logrus-style logger; topic is a *pubsub.Topic and ctx a context.Context
// set up elsewhere.
var stdout bytes.Buffer
cmd := exec.Command("ffmpeg", "-i", url, "-f", "s16le", "-acodec", "pcm_s16le", "-ar", "16000", "-ac", "1", "-")
cmd.Stdout = &stdout
if err := cmd.Start(); err != nil {
    log.Fatal(err)
}
// Every 3 seconds, publish whatever ffmpeg has produced so far, then reset the buffer.
ticker := time.NewTicker(3 * time.Second)
for {
    select {
    case <-ticker.C:
        bytesConverted := stdout.Len()
        log.Infof("Converted %d bytes", bytesConverted)
        // Copy before resetting: Publish is asynchronous, so it must not see
        // the buffer's backing array reused by later writes.
        data := make([]byte, bytesConverted)
        copy(data, stdout.Bytes())
        // Send the data we converted, even if there are no bytes.
        topic.Publish(ctx, &pubsub.Message{
            Data: data,
        })
        stdout.Reset()
    }
}
The transcribe service pulls messages from the queue at a rate of one every 3 seconds, which helps process the audio data at about the same rate as it is created. There are limits on the Speech API stream: it cannot be longer than 60 seconds, so I stop the old stream and start a new one every 30 seconds, and we never hit the limit no matter how long the video call lasts.
This is how I'm transcribing it:
// Assumed context: speechpb is the Cloud Speech v1 proto package; subscription
// is a *pubsub.Subscription; prepareNewStream and getResult are defined elsewhere.
stream := prepareNewStream()
clipLengthTicker := time.NewTicker(30 * time.Second)
chunkLengthTicker := time.NewTicker(3 * time.Second)
cctx, cancel := context.WithCancel(context.TODO())
defer cancel() // stops Receive when this function returns
err := subscription.Receive(cctx, func(ctx context.Context, msg *pubsub.Message) {
    select {
    case <-clipLengthTicker.C:
        // 30 seconds reached: close this stream well under the 60-second
        // Speech API limit, collect its results, and start a fresh one.
        log.Infof("Clip length reached.")
        log.Infof("Closing stream and starting over")
        if err := stream.CloseSend(); err != nil {
            log.Fatalf("Could not close stream: %v", err)
        }
        go getResult(stream)
        stream = prepareNewStream()
    case <-chunkLengthTicker.C:
        log.Infof("Chunk length reached.")
        bytesConverted := len(msg.Data)
        log.Infof("Received %d bytes\n", bytesConverted)
        if bytesConverted > 0 {
            if err := stream.Send(&speechpb.StreamingRecognizeRequest{
                StreamingRequest: &speechpb.StreamingRecognizeRequest_AudioContent{
                    AudioContent: msg.Data,
                },
            }); err != nil {
                resp, _ := stream.Recv()
                log.Errorf("Could not send audio: %v", resp.GetError())
            }
        }
        msg.Ack()
    }
})
I think the problem is that my 3-second chunks don't necessarily line up with the starts and ends of phrases or sentences, so I suspect the Speech API is a recurrent neural network that has been trained on full sentences rather than individual words. Starting a clip in the middle of a sentence therefore loses some data, because the API can't figure out the first few words up to the natural end of a phrase. I also lose some data in changing from an old stream to a new one; some context is lost. I guess overlapping clips might help with this, as in the sketch below.
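To make the overlap idea concrete, here is a minimal, untested sketch of what the transcode side could do: keep the tail of each published chunk and replay it at the start of the next one. The half-second overlap is an arbitrary assumption, as is the publishChunk helper itself.

package transcode

import (
	"context"

	"cloud.google.com/go/pubsub"
)

// 0.5 s of LINEAR16 audio at 16 kHz mono: 16000 samples/s * 2 bytes/sample * 0.5 s.
const overlapBytes = 16000

var tail []byte

// publishChunk prepends the tail of the previous chunk so that a phrase cut
// mid-sentence is seen a second time, with more of its surrounding context.
func publishChunk(ctx context.Context, topic *pubsub.Topic, chunk []byte) {
	data := make([]byte, 0, len(tail)+len(chunk))
	data = append(data, tail...)  // replay the end of the previous chunk
	data = append(data, chunk...) // then the new audio
	topic.Publish(ctx, &pubsub.Message{Data: data})

	// Remember the last overlapBytes of this chunk for next time.
	start := len(chunk) - overlapBytes
	if start < 0 {
		start = 0
	}
	tail = append([]byte(nil), chunk[start:]...)
}

The transcribe side would then have to tolerate, or de-duplicate, the repeated words where consecutive results overlap.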
I have a couple of questions:
1) Does this architecture seem appropriate for my constraints (unknown length of audio stream, etc.)?
2) What can I do to improve accuracy and minimise lost data?
(Note: I’ve simplified the examples for readability. Please point out if anything doesn’t make sense, because I’ve been heavy-handed in cutting them down.)
-
C++ - How to capture MJPEG stream images from an IP camera (not an H.264 stream) [on hold]
25 July 2017, by ngân phạm
I know that many IP cameras support both image streaming (e.g. MJPEG) and H.264 video streaming. I also know how to use OpenCV to capture H.264 video. But I don't know whether the VideoCapture class in OpenCV can also capture images from an MJPEG stream, or whether I have to use another library like FFmpeg or libVLC. My camera is a Hikvision DS-2CD2T42FWD-I8.
Thanks in advance
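For reference, here is a minimal sketch of what I would expect to work if OpenCV's VideoCapture (built with FFmpeg support) can read MJPEG over HTTP; the URL is a placeholder, not the camera's real endpoint:

#include <opencv2/opencv.hpp>
#include <iostream>

int main() {
    // Placeholder MJPEG URL; the actual path depends on the camera firmware.
    cv::VideoCapture cap("http://192.168.1.64/mjpg/video.mjpg");
    if (!cap.isOpened()) {
        std::cerr << "Failed to open MJPEG stream" << std::endl;
        return 1;
    }
    cv::Mat frame;
    // Read and display frames until the stream ends or Esc is pressed.
    while (cap.read(frame)) {
        cv::imshow("MJPEG", frame);
        if (cv::waitKey(1) == 27) {
            break;
        }
    }
    return 0;
}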