
Media (2)
-
Example of action buttons for a collaborative collection
27 February 2013, by
Updated: March 2013
Language: French
Type: Image
-
Example of action buttons for a personal collection
27 February 2013, by
Updated: February 2013
Language: English
Type: Image
Other articles (30)
-
What is an editorial
21 June 2013, by
Write your point of view in an article. It will be filed in a section set aside for that purpose.
An editorial is a text-only article. Its purpose is to gather points of view in a dedicated section. A single editorial is featured on the home page; to read previous ones, browse the dedicated section.
You can customise the editorial creation form.
Editorial creation form: for a document of the editorial type, the (...) -
Contribute to translation
13 April 2011
You can help us improve the language used in the software interface to make MediaSPIP more accessible and user-friendly. You can also translate the interface into any language, allowing it to spread to new linguistic communities.
To do this, we use the translation interface of SPIP, where all the language modules of MediaSPIP are available. Just subscribe to the mailing list and request further information on translation.
MediaSPIP is currently available in French and English (...) -
Use, discuss, criticize
13 April 2011, by
Talk to people directly involved in MediaSPIP's development, or to people around you who could use MediaSPIP to share, enhance or develop their creative projects.
The bigger the community, the more MediaSPIP’s potential will be explored and the faster the software will evolve.
A discussion list is available for all exchanges between users.
On other sites (5063)
-
OpenCV Encoding to H264 changing original RGB pixel values for gray images
13 January 2020, by CristoJV
I have the following issue:
I'm creating a uniform gray color video (for testing) using OpenCV's VideoWriter. The output video should reproduce a constant image in which every pixel has the same value x (25, 51, 76, ... and so on).
When I generate the video using the MJPG encoder:

vw = cv2.VideoWriter('./videos/input/gray1.mp4',
                     cv2.VideoWriter_fourcc(*'MJPG'),
                     fps, (resolution[1], resolution[0]))

and read the output back using the VideoCapture class, everything works fine: I get a frame array with all pixel values set to 25, 51, 76 and so on.
However, when I generate the video using HEV1 (H.265) or H264:

vw = cv2.VideoWriter('./videos/input/gray1.mp4',
                     cv2.VideoWriter_fourcc(*'HEV1'),
                     fps, (resolution[1], resolution[0]))

I run into the following issue. The frame I get back in BGR format follows this pattern:
- The blue channel value is the expected value (x) minus 4 (25-4=21, 51-4=47, 76-4=72, and so on).
- The green channel is the expected value (x) minus 1 (25-1=24, 51-1=50, 76-1=75).
- The red channel is the expected value (x) minus 3 (25-3=22, 51-3=48, 76-3=73).
Notice that each channel is reduced by a constant offset of 4, 1 and 3 respectively, independently of the pixel value (so the effect is constant).
An offset that depended on the pixel value would be easier to explain than a fixed one.
What is worse is that if I generate a video whose frames are pure colors (pixel values [255 0 0], [0 255 0] and [0 0 255]), I get the corresponding output values [251 0 0], [0 254 0] and [0 0 252].
I thought this was related to the grayscale Y value, where:

Y = 76/256 * RED + 150/256 * GREEN + 29/256 * BLUE

But these coefficients do not match the output I obtained. Maybe the problem is in the reading with VideoCapture?
EDIT:
If I want the pixels to come back with the same output value (e.g. [10, 10, 10]), experimentally I have to create an image where the red and blue channels hold the green channel value plus 2:

value = 10
img = np.zeros((resolution[0], resolution[1], 3), dtype=np.uint8) + value
img[:, :, 2] = img[:, :, 2] + 2
img[:, :, 1] = img[:, :, 1] + 0
img[:, :, 0] = img[:, :, 0] + 2

Has anyone experienced this issue? Is it related to the encoding process, or does OpenCV simply treat the image differently before encoding depending on the fourcc parameter value?
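A small reproduction sketch can help isolate where the drift comes from (this is not from the question; it assumes an OpenCV build whose VideoWriter supports both fourccs, and the file names are placeholders). It writes a constant gray clip with each encoder, reads it back with VideoCapture and prints the per-channel means; if only the H.264/HEVC path drifts, the likely culprit is the limited-range YUV 4:2:0 conversion inside that encode/decode pipeline rather than VideoCapture itself.

import numpy as np
import cv2

def roundtrip(fourcc, path, value=25, size=(240, 320), fps=15, frames=30):
    # Write a clip where every pixel of every frame equals `value`
    vw = cv2.VideoWriter(path, cv2.VideoWriter_fourcc(*fourcc),
                         fps, (size[1], size[0]))
    img = np.full((size[0], size[1], 3), value, dtype=np.uint8)
    for _ in range(frames):
        vw.write(img)
    vw.release()

    # Read the first decoded frame back and report the mean per BGR channel
    cap = cv2.VideoCapture(path)
    ok, frame = cap.read()
    cap.release()
    if ok:
        print(fourcc, frame.reshape(-1, 3).mean(axis=0))

roundtrip('MJPG', 'gray_mjpg.avi')   # expected to stay close to the input value
roundtrip('avc1', 'gray_h264.mp4')   # may drift if this path goes through
                                     # limited-range YUV 4:2:0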
-
ffmpeg - trying to add a reversed section of original video to the back (boomerang effect)
23 January 2020, by sn0ep
I have an ffmpeg command which basically scales the video down to 720p. Now I want to extend it so that it also appends a reversed copy of the video (a boomerang effect), so the result loops like this:
0s -> 10s -> 0s
Original command:

ffmpeg -ss 0.0 -to 10.0 -i in.mp4 -filter_complex "fps=15,scale=720:-1" -y out.mp4

Command after my edits:

ffmpeg -ss 0.0 -to 10.0 -i in.mp4 -filter_complex "[0:v]fps=15,scale=720:-1,reverse,fifo[r];[0:v][r] concat=n=2:v=1 [v]" -map "[v]" -y out.mp4

When executing it I get the following errors:

[Parsed_concat_4 @ 0x7feb89d01d40] Input link in1:v0 parameters (size 720x1280, SAR 1:1) do not match the corresponding output link in0:v0 parameters (1080x1920, SAR 1:1)
[Parsed_concat_4 @ 0x7feb89d01d40] Failed to configure output pad on Parsed_concat_4
Error reinitializing filters!
Failed to inject frame into filter network: Invalid argument
Error while processing the decoded data for stream #0:1
I'm very new to writing advanced ffmpeg commands.
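A hedged sketch of one possible fix (not from the thread): the error says the two concat inputs differ in size, because the reversed branch has been scaled to 720x1280 while the second input [0:v] is still the raw 1080x1920 stream. Scaling once and then splitting keeps both branches identical; the Python wrapper and file names below are only illustrative.

import subprocess

# Scale first, then split into a forward and a reversed branch so that both
# concat inputs have the same size and SAR.
filtergraph = (
    "[0:v]fps=15,scale=720:-1,split[fwd][tmp];"
    "[tmp]reverse,fifo[rev];"
    "[fwd][rev]concat=n=2:v=1[v]"
)

subprocess.run([
    "ffmpeg", "-ss", "0.0", "-to", "10.0", "-i", "in.mp4",
    "-filter_complex", filtergraph,
    "-map", "[v]", "-y", "out.mp4",
], check=True)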
-
Why does JavaFX run the converted mp4 but not the original?
20 February 2020, by brat
Simply put, I have an mp4 video file. If I play it directly with the code posted here, the window opens but it is white/blank. No movie showing.
But if I first run the original movie through ffmpeg, it works:

ffmpeg -i original.mp4 -s hd720 converted.mp4

Note that I'm converting from mp4 to mp4; I just change the size to hd720.
So, the question is: why do I need to do that?

My code:
package mypack;

import javafx.application.Application;
import javafx.scene.Group;
import javafx.scene.Scene;
import javafx.scene.layout.BorderPane;
import javafx.scene.media.Media;
import javafx.scene.media.MediaPlayer;
import javafx.scene.media.MediaView;
import javafx.stage.Stage;

public class MyPlayer extends Application {

    public static void main(String[] args) {
        launch(args);
    }

    @Override
    public void start(Stage primaryStage) throws Exception {
        BorderPane content = new BorderPane();
        Scene scene = new Scene(content, 540, 209);
        primaryStage.setScene(scene);
        primaryStage.setTitle("Hello Media");

        String source;
        source = "file:///C:/Users/Noob/Desktop/Movies/original.mp4";
        // source = "file:///C:/Users/Noob/Desktop/Movies/converted.mp4";

        Media media = new Media(source);
        MediaPlayer mediaPlayer = new MediaPlayer(media);
        mediaPlayer.setAutoPlay(true);
        MediaView mediaView = new MediaView(mediaPlayer);
        ((BorderPane) scene.getRoot()).getChildren().add(mediaView);
        primaryStage.show();
    }
}

For the audio/video aficionados/ffmpeg pros out there, this is what gets output from the original:
ffprobe -i original.mp4
ffprobe version 4.2.2 Copyright (c) 2007-2019 the FFmpeg developers
built with gcc 9.2.1 (GCC) 20200122
configuration: --enable-gpl --enable-version3 --enable-sdl2 --enable-fontconfig --enable-gnutls --enable-iconv --enable-libass --enable-libdav1d --enable-libbluray --enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libtheora --enable-libtwolame --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libzimg --enable-lzma --enable-zlib --enable-gmp --enable-libvidstab --enable-libvorbis --enable-libvo-amrwbenc --enable-libmysofa --enable-libspeex --enable-libxvid --enable-libaom --enable-libmfx --enable-amf --enable-ffnvcodec --enable-cuvid --enable-d3d11va --enable-nvenc --enable-nvdec --enable-dxva2 --enable-avisynth --enable-libopenmpt
libavutil 56. 31.100 / 56. 31.100
libavcodec 58. 54.100 / 58. 54.100
libavformat 58. 29.100 / 58. 29.100
libavdevice 58. 8.100 / 58. 8.100
libavfilter 7. 57.100 / 7. 57.100
libswscale 5. 5.100 / 5. 5.100
libswresample 3. 5.100 / 3. 5.100
libpostproc 55. 5.100 / 55. 5.100
[mov,mp4,m4a,3gp,3g2,mj2 @ 0000029bc02bc540] st: 0 edit list: 1 Missing key frame while searching for timestamp: 0
[mov,mp4,m4a,3gp,3g2,mj2 @ 0000029bc02bc540] st: 0 edit list 1 Cannot find an index entry before timestamp: 0.
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'original.mp4':
Metadata:
major_brand : mp42
minor_version : 0
compatible_brands: mp42mp41
creation_time : 2020-02-18T17:09:21.000000Z
Duration: 00:00:22.56, start: 0.000000, bitrate: 1631 kb/s
Stream #0:0(eng): Video: h264 (High) (avc1 / 0x31637661), yuv420p(tv, bt709), 3840x2160 [SAR 1:1 DAR 16:9], 1624 kb/s, 29.97 fps, 29.97 tbr, 29970 tbn, 59.94 tbc (default)
Metadata:
creation_time : 2020-02-18T17:09:21.000000Z
handler_name : Mainconcept MP4 Video Media Handler
encoder : AVC Coding
Could it have to do with size? Too big, perhaps, since it says 3840x2160 in the original and I'm making it smaller with hd720?
If so, I would expect at least a corner of the movie showing, instead of a totally blank window.
I'm new at this, so if you could also point me to some good ffmpeg links (excluding the official man pages) and/or a good tutorial/info page on video formats aimed at beginners, it would be much appreciated. My understanding so far is that there are containers which hold streams, and each stream can have a different codec depending on the container.
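A hedged sketch of how one might automate the workaround described above (this is not from the question; the 1080-line height limit, the function name and the file names are assumptions): probe the clip with ffprobe and only re-encode when the resolution is larger than what the local JavaFX media stack appears to handle, mirroring the -s hd720 conversion that already works.

import json
import subprocess

def ensure_playable(src, dst, max_height=1080):
    # Ask ffprobe for the resolution of the first video stream
    probe = subprocess.run(
        ["ffprobe", "-v", "error", "-select_streams", "v:0",
         "-show_entries", "stream=width,height", "-of", "json", src],
        capture_output=True, text=True, check=True)
    stream = json.loads(probe.stdout)["streams"][0]
    if stream["height"] <= max_height:
        return src                       # small enough, play the original
    # Downscale the same way the manual command in the question does
    subprocess.run(["ffmpeg", "-y", "-i", src, "-s", "hd720", dst], check=True)
    return dst

print(ensure_playable("original.mp4", "converted.mp4"))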