
Other articles (37)
-
Support for all media types
10 April 2011. Unlike many modern document-sharing applications and platforms, MediaSPIP aims to handle as many different document formats as possible, whether they are: images (png, gif, jpg, bmp and others...); audio (MP3, Ogg, Wav and others...); video (Avi, MP4, Ogv, mpg, mov, wmv and others...); or textual content, code and more (OpenOffice, Microsoft Office (spreadsheets, presentations), web (HTML, CSS), LaTeX, Google Earth) (...)
-
List of compatible distributions
26 April 2011. The table below lists the Linux distributions compatible with MediaSPIP's automated installation script.

Distribution | Version name         | Version number
Debian       | Squeeze              | 6.x.x
Debian       | Wheezy               | 7.x.x
Debian       | Jessie               | 8.x.x
Ubuntu       | The Precise Pangolin | 12.04 LTS
Ubuntu       | The Trusty Tahr      | 14.04

If you want to help us improve this list, you can provide us with access to a machine whose distribution is not mentioned above, or send us the fixes necessary to add one (...) -
Selection of projects using MediaSPIP
2 May 2011. The examples below are representative of specific uses of MediaSPIP for particular projects.
MediaSPIP farm @ Infini
The non-profit organization Infini develops hosting activities, an internet access point, training, and innovative projects in the field of information and communication technologies, and hosts websites. It plays a unique and prominent role in the Brest (France) area and at the national level, among the half-dozen associations of this kind. Its members (...)
On other sites (6269)
-
Rails 5 - Concurrent large Video uploads using Carrierwave eats up the server memory/space
22 March 2020, by Milind. I have a working Rails 5 app using Reactjs for the frontend and React Dropzone Uploader to upload video files via carrierwave.
So far, what is working great is listed below -
- User can upload videos, and videos are encoded based on the selection made by the user - HLS or MPEG-DASH for online streaming.
- Once the video is uploaded on the server, it starts streaming it by:
  - FIRST, copying the video to the /tmp folder.
  - Running a bash script that uses ffmpeg to transcode the uploaded video with predefined commands, producing new video fragments inside the /tmp folder (a sketch of what such a script might look like follows this list).
  - Once the background job is done, all the videos are uploaded to AWS S3, which is how the default carrierwave works.
- So, when multiple videos are uploaded, they are all copied into the /tmp folder, then transcoded, and eventually uploaded to S3.
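
For reference, a minimal sketch of what such a transcode script might look like for the HLS case - the paths, segment length, and output names here are assumptions, not the poster's actual script:

#!/bin/bash
# Hypothetical transcode step: segment an already-uploaded video into
# HLS fragments under /tmp without re-encoding. All paths are placeholders.
INPUT="/tmp/uploaded_video.mp4"
OUTDIR="/tmp/hls_output"
mkdir -p "$OUTDIR"
ffmpeg -i "$INPUT" \
 -codec copy \
 -start_number 0 \
 -hls_time 10 \
 -hls_list_size 0 \
 -f hls "$OUTDIR/playlist.m3u8"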
My questions, where I am looking for some help, are listed below -

1- The above process is good for small videos, BUT what if there are many concurrent users uploading 2 GB videos? I know this will kill my server, as my /tmp folder will keep on growing and consume all the memory and space, making it die hard. How can I allow concurrent video uploads without affecting my server's memory consumption?

2- Is there a way where I can directly upload the videos to AWS S3 first, and then use one more proxy server/child application to encode the videos from S3 - download each one to the child server, convert it, and upload it again to the destination? This is almost the same process, but done in the cloud, where memory consumption can be on demand, although it will not be cost-effective. (A sketch of such a direct upload follows below.)
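
For what it is worth, a minimal sketch of the client-side half of that direct-to-S3 idea, assuming the Rails app has already generated a presigned S3 PUT URL and handed it to the uploader ($PRESIGNED_URL and the filename are hypothetical):

# Upload the file straight to S3, bypassing the Rails server's /tmp and
# memory entirely; the URL is assumed to come from a presigned-URL
# endpoint in the Rails app.
curl -X PUT --upload-file "video.mp4" "$PRESIGNED_URL"

The transcoding job could then be triggered by an S3 event notification, for example, instead of running on the web server.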
3- Is there some easy and cost-effective way by which I can upload large videos, transcode them, and upload them to AWS S3, without affecting my server's memory? Am I missing some piece of technical architecture here?
4- How do Youtube/Netflix work? I know they do the same thing in a smart way, but can someone help me improve this?
Thanks in advance.
-
Metadata when Remuxing MP3 Audiobooks into Apple-friendly MP4 with FFmpeg
23 August 2022, by Crissov. Since there is apparently no way to tell iTunes or iOS by ID3 tag or file extension that MP3s contain an audiobook (or radio play), I would like to remux them into MPEG-4 Part 14 containers with an .m4b file extension (without converting, i.e. transcoding or re-encoding, the audio stream to AAC) and set the proper media type tag (stik = 2, Audiobook).

$ ffmpeg -hide_banner -y \
 -i "infile.mp3" -codec copy -map 0 \
 "outfile.m4b"

When auto-detecting the intended format from the output filename, FFmpeg (version 4.2.1 at the time of writing) toggles its -f ipod compatibility mode for .m4a and .m4b, which means it will apparently not accept MPEG-1/2 Layer 3 audio within an MP4 container:

[ipod @ 00000223bd927e40]
 Could not find tag for codec mp3 in stream #0, codec not currently supported in container
 Could not write header for output file #0 (incorrect codec parameters ?): Invalid argument

I can override that (or change the file extension afterwards when using "outfile.mp4"):

$ ffmpeg -hide_banner -y \
 -i "infile.mp3" -codec copy -map 0 -f mp4 \
 "outfile.m4b"
The near-zero time required for the conversion and the FFprobe output assure me that the remuxing was successful:

Stream #0:0(und): Audio: mp3 (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 160 kb/s (default)

Custom ID3v2 tag fields and ones without a known MP4 cognate have been dropped, though. I would like to preserve them!

How do I do that with -map_metadata, if it is possible at all?

How can I use -metadata to add the necessary tag field (atom: stik) which would mark the file as an audiobook? Phrased more generally: how do I add a manually specified metadata tag field (e.g. an MP4 atom or box) with FFmpeg?

$ ffmpeg -hide_banner -y \
 -i "infile.mp3" -codec copy -map 0 -f mp4 \
 -metadata:s:a:0 language=deu \
 -metadata stik=2 \
 "outfile.m4b"
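In case FFmpeg's mov/ipod muxer simply ignores an unknown key like stik, one workaround would be to set the atom after remuxing with a dedicated tagger such as AtomicParsley - a sketch, assuming AtomicParsley is installed:

$ AtomicParsley "outfile.m4b" --stik Audiobook --overWrite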

FFmpeg documentation

-metadata[:metadata_specifier] key=value (output,per-metadata)
 Set a metadata key/value pair.
 …

-map_metadata[:metadata_spec_out] infile[:metadata_spec_in] (output,per-metadata)
 Set metadata information of the next output file from infile. Note that those are file indices (zero-based), not filenames. Optional metadata_spec_in/out parameters specify, which metadata to copy. A metadata specifier can have the following forms:

 g
  global metadata, i.e. metadata that applies to the whole file
 s[:stream_spec]
  per-stream metadata. stream_spec is a stream specifier as described in the Stream specifiers chapter. In an input metadata specifier, the first matching stream is copied from. In an output metadata specifier, all matching streams are copied to.
 c:chapter_index
  per-chapter metadata. chapter_index is the zero-based chapter index.
 p:program_index
  per-program metadata. program_index is the zero-based program index.

 If metadata specifier is omitted, it defaults to global.

 By default, global metadata is copied from the first input file, per-stream and per-chapter metadata is copied along with streams/chapters. These default mappings are disabled by creating any mapping of the relevant type. A negative file index can be used to create a dummy mapping that just disables automatic copying.
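If I read the quoted documentation correctly, an explicit global-to-global mapping would be spelled like this (a sketch - whether the ID3v2 field names survive into the MP4 container is exactly what I am unsure about):

$ ffmpeg -hide_banner -y \
 -i "infile.mp3" -codec copy -map 0 -f mp4 \
 -map_metadata 0 \
 "outfile.m4b"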

PS

- Apple does not seem to formally document stik. MPMediaType is slightly different. Pointers to the contrary would be greatly appreciated.
- Ideally, I would like to automatically add all *.mp3 files within a subdirectory, sorted alphabetically (they share the same encoder settings), as chapters within a single .mp4 container, but that probably deserves a separate question; a tentative sketch follows anyway.
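
For that last point, my tentative plan is FFmpeg's concat demuxer - a sketch (the list file is my own construction, and chapter marks would presumably still have to be added separately):

$ printf "file '%s'\n" *.mp3 > list.txt
$ ffmpeg -hide_banner -y \
 -f concat -safe 0 -i list.txt \
 -codec copy -f mp4 \
 "outfile.m4b"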