
Media (1)
-
Rennes Emotion Map 2010-11
19 October 2011
Updated: July 2013
Language: French
Type: Text
Other articles (77)
-
Publish on MediaSPIP
13 June 2013. Can I post content from an iPad tablet?
Yes, if your installed MediaSPIP is at version 0.2 or higher. If needed, contact your MediaSPIP administrator to find out.
-
Contribute to documentation
13 April 2011. Documentation is vital to the development of improved technical capabilities.
MediaSPIP welcomes documentation from users as well as developers, including: critiques of existing features and functions; articles contributed by developers, administrators, content producers and editors; screenshots to illustrate the above; translations of existing documentation into other languages.
To contribute, register to the project users' mailing (...)
-
User profiles
12 April 2011. Each user has a profile page where they can edit their personal information. In the default top-of-page menu, a menu item is created automatically when MediaSPIP is initialized, visible only when the visitor is logged in to the site.
The user can edit their profile from their author page; a "Modifier votre profil" link in the navigation is (...)
On other sites (9630)
-
How to create video with dynamic pictures like Facebook Friend's Day video
21 February 2019, by Vinicius. How to create a video with pictures from a user, just like Facebook does with its Facebook Friend's Day video?
Example : https://www.youtube.com/watch?v=mNWJ_XxfQfU
The intention is to generate this sort of video from images that a visitor uploads to a website. FFmpeg does support image animation, but producing 3D animations like the one above would seemingly take ages with FFmpeg alone. I wonder if there is an alternative to FFmpeg that can generate this sort of animation, or software like After Effects that can either produce a template usable with FFmpeg (or any command-line alternative) or has its own command-line interface that can run on a Linux machine to do such a thing.
Basically, the user would upload the pictures, the server would crop them to the same size, and then it would convert them into a video like the one mentioned above.
-
How to serve dynamic m3u8 playlist (HLS files) via node.js server to React using axios requests?
30 September 2021, by dev_2k20. I am building a VOD streaming server with Node.js. The streaming server delegates creation of the HLS playlist to fluent-ffmpeg, which converts and encrypts an mp4 video file into an m3u8 playlist. I am using the hls.js client library in React to play HLS videos with the following code:


import React, { Component } from "react";
import Hls from "hls.js";

class Player extends Component {
  componentDidMount() {
    if (Hls.isSupported() && this.player) {
      const video = this.player;
      const hls = new Hls({ enableWorker: false });
      hls.loadSource("somelink");
      hls.attachMedia(video);
      hls.on(Hls.Events.MANIFEST_PARSED, function () {
        video.play();
      });
    }
  }

  render() {
    return (
      <video
        ref={(player) => (this.player = player)}
        autoPlay={true}
      />
    );
  }
}
export default Player;



and I have created a Node server using hls-server, and I am serving the static playlist as follows. I have yet to serve files to React; at the moment I am using a simple HTML page to serve the video playlist:


app.get("/", (req, res) => {
 return res.status(200).sendFile(`${__dirname}/client.html`);
});



I want to serve the playlist to the client side in React. One way of doing that would be to add the source as a route link, but I am confused about adding an axios GET request that asks the Node server to stream the m3u8 playlist as a response, and then streaming that data on the React side (as currently done in client.html). In client.html, I have assigned the file location as the video source for hls.js.


componentDidMount() {
  if (Hls.isSupported() && this.player) {
    const video = this.player;
    const hls = new Hls({ enableWorker: false });
    hls.loadSource("http://localhost:3002/live/videoStream/somePlaylist.m3u8");
    hls.attachMedia(video);
    hls.on(Hls.Events.MANIFEST_PARSED, function () {
      video.play();
    });
  }
}



Am I correct in assuming that axios will request the server for each chunk of video?
In my mind, the process should work as follows (pseudo-code):


React-side


componentDidMount() {
  if (Hls.isSupported() && this.player) {
    axios.get(`/live/video/courseA`).then((res) => {
      const video = this.player;
      const hls = new Hls({ enableWorker: false });
      hls.loadSource(res.data);
      hls.attachMedia(video);
      hls.on(Hls.Events.MANIFEST_PARSED, function () {
        video.play();
      });
    });
  }
}



Server-side


app.get('/live/video/:videoPlaylist', async (req, res, next) => {
  const file = fs.createReadStream('path to playlist');
  file.pipe(res);
});



Since the VOD server-side route will be used to play the selected playlist, I am assuming that in React, axios will request the video from the server side, and that on the server side I will pipe each file chunk back as the response via streaming.


Any help will be appreciated!


Thanks in advance.


-
Normalize video contrast to the full dynamic range with ffmpeg?
18 May 2017, by Roofus. The following ImageMagick command line normalizes the RGB channels of an image individually, so that in each channel the smallest value maps to 0 and the largest value maps to 255:
convert fish.jpg -channel all -contrast-stretch 0.003x0.003% fish2.jpg
Is there an ffmpeg filter which can normalize the RGB channels of every individual frame of a video ?
The only filter I can find is "histeq", which equalizes (flattens) the histogram rather than normalizing the contrast. I have applied it per RGB channel using variations on this command line:
ffmpeg -i fish.jpg -vf "format=rgb24,extractplanes=r+g+b[r][g][b],[r]histeq=strength=.1[r2],[g]histeq=strength=.1[g2],[b]histeq=strength=.1[b2],[g2][b2][r2]mergeplanes=0x001020:gbrp" fish3.jpg
but since it tries to flatten the histogram, it always gives a very different (unacceptable) result; for example, the imgur image whose code is eHL51.jpg (not enough reputation for a link).
Based on this answer :
video normalization with ffmpeg
I have also tested -vf "scale=out_range=full" and -vf "pp=al:f" (see the log below), but in both cases the result fish3.jpg was unchanged from fish.jpg, so apparently nothing was done.

C:\Users\Roofus\Desktop>ffmpeg -i fish.jpg -vf scale=out_range=full -color_range 2 -pix_fmt yuvj420p fish3.jpg
ffmpeg version 3.2.4 Copyright (c) 2000-2017 the FFmpeg developers
built with gcc 6.3.0 (GCC)
configuration: --enable-gpl --enable-version3 --enable-d3d11va --enable-dxva2 --enable-libmfx --enable-nvenc --enable-avisynth --enable-bzlib --enable-fontconfig --enable-frei0r --enable-gnutls --enable-iconv --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libfreetype --enable-libgme --enable-libgsm --enable-libilbc --enable-libmodplug --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenh264 --enable-libopenjpeg --enable-libopus --enable-librtmp --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs --enable-libxvid --enable-libzimg --enable-lzma --enable-zlib
libavutil 55. 34.101 / 55. 34.101
libavcodec 57. 64.101 / 57. 64.101
libavformat 57. 56.101 / 57. 56.101
libavdevice 57. 1.100 / 57. 1.100
libavfilter 6. 65.100 / 6. 65.100
libswscale 4. 2.100 / 4. 2.100
libswresample 2. 3.100 / 2. 3.100
libpostproc 54. 1.100 / 54. 1.100
Input #0, image2, from 'fish.jpg':
Duration: 00:00:00.04, start: 0.000000, bitrate: 3942 kb/s
Stream #0:0: Video: mjpeg, yuvj420p(pc, bt470bg/unknown/unknown), 960x540 [SAR 96:96 DAR 16:9], 25 tbr, 25 tbn, 25 tbc
[swscaler @ 0000000000442f60] deprecated pixel format used, make sure you did set range correctly
Output #0, image2, to 'fish3.jpg':
Metadata:
encoder : Lavf57.56.101
Stream #0:0: Video: mjpeg, yuvj420p(pc), 960x540 [SAR 1:1 DAR 16:9], q=2-31, 200 kb/s, 25 fps, 25 tbn, 25 tbc
Metadata:
encoder : Lavc57.64.101 mjpeg
Side data:
cpb: bitrate max/min/avg: 0/0/200000 buffer size: 0 vbv_delay: -1
Stream mapping:
Stream #0:0 -> #0:0 (mjpeg (native) -> mjpeg (native))
Press [q] to stop, [?] for help
frame= 1 fps=0.0 q=2.2 Lsize=N/A time=00:00:00.04 bitrate=N/A speed= 5x
video:23kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown
C:\Users\Roofus\Desktop>fish3.jpg
C:\Users\Roofus\Desktop>ffmpeg -i fish.jpg -vf "pp=al:f" -color_range 2 -pix_fmt yuvj420p fish3.jpg
ffmpeg version 3.2.4 Copyright (c) 2000-2017 the FFmpeg developers
Input #0, image2, from 'fish.jpg':
Duration: 00:00:00.04, start: 0.000000, bitrate: 3942 kb/s
Stream #0:0: Video: mjpeg, yuvj420p(pc, bt470bg/unknown/unknown), 960x540 [SAR 96:96 DAR 16:9], 25 tbr, 25 tbn, 25 tbc
File 'fish3.jpg' already exists. Overwrite ? [y/N] y
[Parsed_pp_0 @ 0000000000321da0] This syntax is deprecated. Use '|' to separate the list items.
Output #0, image2, to 'fish3.jpg':
Metadata:
encoder : Lavf57.56.101
Stream #0:0: Video: mjpeg, yuvj420p(pc), 960x540 [SAR 96:96 DAR 16:9], q=2-31, 200 kb/s, 25 fps, 25 tbn, 25 tbc
Metadata:
encoder : Lavc57.64.101 mjpeg
Side data:
cpb: bitrate max/min/avg: 0/0/200000 buffer size: 0 vbv_delay: -1
Stream mapping:
Stream #0:0 -> #0:0 (mjpeg (native) -> mjpeg (native))
Press [q] to stop, [?] for help
frame= 1 fps=0.0 q=2.2 Lsize=N/A time=00:00:00.04 bitrate=N/A speed=4.44x
video:23kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown
C:\Users\Roofus\Desktop>fish3.jpg