
Media (1)
-
Ogg detection bug
22 March 2013, by
Updated: April 2013
Language: French
Type: Video
Other articles (65)
-
Updating from version 0.1 to 0.2
24 June 2013, by
An explanation of the notable changes made between version 0.1 of MediaSPIP and version 0.3. What are the new features?
Regarding software dependencies: use of the latest versions of FFMpeg (>= v1.2.1); installation of the dependencies for Smush; installation of MediaInfo and FFprobe to retrieve metadata; ffmpeg2theora is no longer used; flvtool2 is no longer installed, in favour of flvtool++; ffmpeg-php, which is no longer maintained, is no longer installed (...)
-
Customising by adding your logo, banner or background image
5 September 2013, by
Some themes support three customisation elements: adding a logo; adding a banner; adding a background image.
-
Uploading media and themes via FTP
31 May 2013, by
The MediaSPIP tool also handles media transferred via FTP. If you prefer to upload this way, retrieve the access credentials for your MediaSPIP site and use your favourite FTP client.
From the start, you will find the following directories in your FTP space: config/ : the site's configuration directory; IMG/ : media already processed and online on the site; local/ : the website's cache directory; themes/ : custom themes and stylesheets; tmp/ : working directory (...)
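For example, uploading a custom theme from the command line could look like this (a minimal sketch using the lftp client; the host, credentials and theme name are hypothetical):

# Mirror a local theme directory into the themes/ folder of the site
# (hypothetical host, credentials and directory names):
lftp -u username,password ftp.example.org <<'EOF'
cd themes
mirror -R ./my-theme my-theme
bye
EOF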
On other sites (12690)
-
Rails 5 - Video streaming using Carrierwave: uploaded video size constraint on the server
21 March 2020, by Milind
I have a working Rails 5 app using Reactjs for the frontend and React Dropzone Uploader to upload video files using carrierwave.
So far, what is working well is listed below:
- Users can upload videos, and the videos are encoded based on the selection made by the user: HLS or MPEG-DASH for online streaming.
- Once a video is uploaded to the server, it starts streaming it by:
  - first, copying the video to the /tmp folder;
  - running a bash script that uses ffmpeg to transcode the uploaded video using predefined commands, producing new video fragments inside the /tmp folder (a sketch of such a step follows this list);
  - once the background job is done, uploading all the videos to AWS S3, which is how the default carrierwave uploader works.
- So, when multiple videos are uploaded, they are all copied into the /tmp folder, then transcoded, and eventually uploaded to S3.
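Since the question mentions HLS, the transcoding step described above might look roughly like this (a minimal sketch with hypothetical file names; the actual bash script and encoding settings are not shown in the question):

# Split an uploaded video into HLS segments inside /tmp
# (hypothetical names; a real script would typically re-encode to
# several bitrates instead of just copying the source streams):
ffmpeg -i /tmp/input.mp4 \
  -c copy \
  -f hls \
  -hls_time 10 \
  -hls_playlist_type vod \
  -hls_segment_filename '/tmp/input_%03d.ts' \
  /tmp/input.m3u8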
My questions, where I am looking for some help, are listed below:
1. The above process is fine for small videos, but what if many concurrent users each upload a 2 GB video? I know this will kill my server, as my /tmp folder will keep growing and consume all the memory, making it die hard. How can I allow concurrent video uploads without affecting my server's memory consumption?
2. Is there a way I can upload the videos directly to AWS S3 first, and then use a proxy server or child application to encode the videos from S3: download them to the child server, convert them, and upload them again to the destination? This is almost the same thing done in the cloud, where memory consumption can be on demand, but it will not be cost-effective.
3. Is there some easy and cost-effective way to upload large videos, transcode them and upload them to AWS S3 without affecting my server's memory? Am I missing some technical architecture here?
4. How do YouTube/Netflix do it? I know they do the same thing in a smart way, but can someone help me improve on this?
Thanks in advance.
-
Consume FFmpeg XCFramework from Objective-C, headers not found
19 November 2020, by jamone
I built FFmpeg for Apple's platforms as an XCFramework. I used the script in https://github.com/kewlbear/FFmpeg-iOS-build-script/pull/147 to do so.



I'm now trying to consume that framework inside a traditional iOS/macOS framework (named VideoEditing), which is then used inside my iOS app (which I will soon try to bring to Catalyst).



In VideoEditing I have linked to FFmpeg.xcframework, and then in the app that uses VideoEditing I have linked and embedded FFmpeg.xcframework. Previously I was building FFmpeg as a standard static library and using it from inside VideoEditing in an Objective-C++ wrapper, so I can use it all from Swift.


In that Objective-C++ file I would import FFmpeg headers like #import <libswscale/swscale.h>. To make that work, I had to set header search paths. How are you supposed to do it once you convert to the XCFramework? I've tried @import FFmpeg, #import <FFmpeg/libswscale/swscale.h>, #import <FFmpeg/swscale.h>, as well as #import <libswscale/swscale.h>. In every case I just get a file not found error on the import line.


All of Apple's examples show this only in Swift, with the framework vending a module. If I were to still set a header search path, there are now different headers per architecture.
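For illustration, the per-architecture workaround mentioned here could be expressed as a build-setting override pointing at a single slice of the XCFramework (a sketch only; the slice name and Headers layout are assumptions about what the build script produced):

# Point header search paths at one platform slice of the XCFramework
# (hypothetical slice name and layout; each slice has its own headers):
xcodebuild build -scheme VideoEditing \
  HEADER_SEARCH_PATHS='$(PROJECT_DIR)/FFmpeg.xcframework/ios-arm64/Headers'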





-
How to create a virtual webcam device with the ability to stream a video file in Linux
27 February 2020, by Vlad
I need to create a script that will stream a video file to a virtual webcam, and make the virtual webcam accessible not only with ffplay /dev/videoX etc., but also with different applications like Cheese (the default Ubuntu webcam app), Chrome, Discord and others.
I tried solutions that use v4l2loopback and ffmpeg, but this doesn't work in Chrome/Discord/Cheese/...
Now I have several webcams:
/dev/video0 - the native laptop camera
/dev/video1 - a camera created with DroidCamX, to use my phone's camera as a laptop camera
/dev/video2 - the v4l2loopback camera
When I try to stream a video file to /dev/video2 with ffmpeg, this message shows up:
user@hostname:~/Videos$ sudo ffmpeg -re -i "video.mp4" -map 0:0 -f v4l2 /dev/video2
ffmpeg version 3.4.6-0ubuntu0.18.04.1 Copyright (c) 2000-2019 the FFmpeg developers
built with gcc 7 (Ubuntu 7.3.0-16ubuntu3)
configuration: --prefix=/usr --extra-version=0ubuntu0.18.04.1 --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --enable-gpl --disable-stripping --enable-avresample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librubberband --enable-librsvg --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzmq --enable-libzvbi --enable-omx --enable-openal --enable-opengl --enable-sdl2 --enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-chromaprint --enable-frei0r --enable-libopencv --enable-libx264 --enable-shared
libavutil 55. 78.100 / 55. 78.100
libavcodec 57.107.100 / 57.107.100
libavformat 57. 83.100 / 57. 83.100
libavdevice 57. 10.100 / 57. 10.100
libavfilter 6.107.100 / 6.107.100
libavresample 3. 7. 0 / 3. 7. 0
libswscale 4. 8.100 / 4. 8.100
libswresample 2. 9.100 / 2. 9.100
libpostproc 54. 7.100 / 54. 7.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'video.mp4':
Metadata:
major_brand : isom
minor_version : 512
compatible_brands: isomiso2avc1mp41
encoder : Lavf57.83.100
Duration: 02:09:13.10, start: 0.000000, bitrate: 279 kb/s
Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p(tv, bt709), 798x598 [SAR 1:1 DAR 399:299], 146 kb/s, 24 fps, 24 tbr, 12288 tbn, 48 tbc (default)
Metadata:
handler_name : VideoHandler
Stream #0:1(und): Audio: mp3 (mp4a / 0x6134706D), 48000 Hz, stereo, s16p, 128 kb/s (default)
Metadata:
handler_name : SoundHandler
Stream mapping:
Stream #0:0 -> #0:0 (h264 (native) -> rawvideo (native))
Press [q] to stop, [?] for help
[v4l2 @ 0x5609f238c280] ioctl(VIDIOC_G_FMT): Invalid argument
Could not write header for output file #0 (incorrect codec parameters ?): Invalid argument
Error initializing output stream 0:0
Conversion failed!
But when I run that command with /dev/video0, a stream starts and shows with ffplay /dev/video0. But applications, except Mozilla Firefox, cannot detect the /dev/video0 stream.
So how can I create a virtual camera device properly, in order to stream a video file to it and use it as a camera in various applications?
I have Linux Ubuntu 18.04.4 LTS, kernel version: 5.3.0-40-generic, architecture: x86-64.
Any help appreciated!
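For reference, the combination commonly suggested for this kind of setup looks like the following (a sketch, not a verified fix: the exclusive_caps parameter and the yuv420p pixel format are assumptions about what Chrome and similar applications expect):

# Reload the loopback module so applications that probe device
# capabilities will accept it (exclusive_caps=1 is the commonly
# suggested setting for Chrome):
sudo modprobe -r v4l2loopback
sudo modprobe v4l2loopback devices=1 video_nr=2 card_label="VirtualCam" exclusive_caps=1

# Feed the file to the device as raw yuv420p frames; forcing the pixel
# format avoids VIDIOC_G_FMT/format errors like the one above:
ffmpeg -re -i video.mp4 -map 0:v -vf format=yuv420p -f v4l2 /dev/video2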