
Other articles (61)
-
Support for all types of media
10 April 2011
Unlike many modern document-sharing programs and platforms, MediaSPIP aims to handle as many different document formats as possible, whether they are: images (png, gif, jpg, bmp and others...); audio (MP3, Ogg, Wav and others...); video (Avi, MP4, Ogv, mpg, mov, wmv and others...); textual content, code or other (OpenOffice, Microsoft Office (spreadsheet, presentation), web (html, css), LaTeX, Google Earth) (...)
-
(De)Activating features (plugins)
18 February 2011
To manage the addition and removal of extra features (or plugins), MediaSPIP uses SVP as of version 0.2.
SVP makes it easy to activate plugins from the MediaSPIP configuration area.
To get there, simply go to the configuration area and then to the "Gestion des plugins" (plugin management) page.
MediaSPIP ships by default with the full set of so-called "compatible" plugins; they have been tested and integrated so that they work perfectly with each (...)
-
The plugin: Podcasts.
14 July 2010
The problem with podcasting is, once again, a problem that reveals the state of standardisation of data transport on the Internet.
Two interesting formats exist: the one developed by Apple, strongly geared towards iTunes, whose spec is here; and the "Media RSS Module" format, which is more "open" and is notably backed by Yahoo and the Miro software.
File types supported in the feeds
Apple's format only allows the following formats in its feeds: .mp3 audio/mpeg .m4a audio/x-m4a .mp4 (...)
On other sites (6420)
-
How to port signal() to sigaction()?
28 September 2016, by Shark
Due to recent problems discovered with NDK12 and NDK13b2, I'm thinking of 'porting' libx264's use of signal() (and the missing bsd_signal() in NDK12) to use sigaction() instead.
The problem is, I'm not quite sure what the simplest and fastest way is to replace the signal() calls with sigaction() ones.
For all I can see, it's mainly used in x264-snapshot/common/cpu.c in the following manner, using the following signal handler:
static void sigill_handler( int sig )
{
if( !canjump )
{
signal( sig, SIG_DFL );
raise( sig );
}
canjump = 0;
siglongjmp( jmpbuf, 1 );
}

This is the problematic x264_cpu_detect function... currently, I'm guessing I only need to tackle the ARM version, but I'll still have to replace all occurrences of signal() with sigaction(), so I might just cover both of them to get the thing building... FYI - NDK13 beta2 still has an "unstable" libc, and the build doesn't fail on this part but rather on the first invocation of the rand() function somewhere else... So I'm out of luck, and replacing the signal() calls might be better than just waiting for the official NDK13 release. I'm doing this to get rid of text relocations so I can run the library (and doubango) on API 24 (Android N).

The problematic part of the function that invokes signal():

#elif SYS_LINUX
uint32_t x264_cpu_detect( void )
{
static void (*oldsig)( int );
oldsig = signal( SIGILL, sigill_handler );
if( sigsetjmp( jmpbuf, 1 ) )
{
signal( SIGILL, oldsig );
return 0;
}
canjump = 1;
asm volatile( "mtspr 256, %0\n\t"
"vand 0, 0, 0\n\t"
:
: "r"(-1) );
canjump = 0;
signal( SIGILL, oldsig );
return X264_CPU_ALTIVEC;
}
#endif
#elif ARCH_ARM
void x264_cpu_neon_test( void );
int x264_cpu_fast_neon_mrc_test( void );
uint32_t x264_cpu_detect( void )
{
int flags = 0;
#if HAVE_ARMV6
flags |= X264_CPU_ARMV6;
// don't do this hack if compiled with -mfpu=neon
#if !HAVE_NEON
static void (* oldsig)( int );
oldsig = signal( SIGILL, sigill_handler );
if( sigsetjmp( jmpbuf, 1 ) )
{
signal( SIGILL, oldsig );
return flags;
}
canjump = 1;
x264_cpu_neon_test();
canjump = 0;
signal( SIGILL, oldsig );
#endif
flags |= X264_CPU_NEON;
// fast neon -> arm (Cortex-A9) detection relies on user access to the
// cycle counter; this assumes ARMv7 performance counters.
// NEON requires at least ARMv7, ARMv8 may require changes here, but
// hopefully this hacky detection method will have been replaced by then.
// Note that there is potential for a race condition if another program or
// x264 instance disables or reinits the counters while x264 is using them,
// which may result in incorrect detection and the counters stuck enabled.
// right now Apple does not seem to support performance counters for this test
#ifndef __MACH__
flags |= x264_cpu_fast_neon_mrc_test() ? X264_CPU_FAST_NEON_MRC : 0;
#endif
// TODO: write dual issue test? currently it's A8 (dual issue) vs. A9 (fast mrc)
#endif
return flags;
}
#else
uint32_t x264_cpu_detect( void )
{
return 0;
}

So the question is really this: what would be the quickest/easiest/fastest way to replace the signal() calls with sigaction() ones while preserving the current functionality?
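A minimal sketch, not taken from x264, of what such a replacement could look like; the helper name install_sigill_handler is made up here, and it only assumes that the behaviour to preserve is "install a SIGILL handler, then restore the previous disposition", as the snippets above do:

#include <signal.h>
#include <string.h>

/* Install 'handler' for SIGILL via sigaction() and save the previous
 * disposition into *old so it can be restored later; this mirrors the
 * oldsig = signal( SIGILL, sigill_handler ); pattern used in cpu.c. */
static void install_sigill_handler( void (*handler)(int), struct sigaction *old )
{
    struct sigaction sa;
    memset( &sa, 0, sizeof(sa) );
    sa.sa_handler = handler;
    sigemptyset( &sa.sa_mask );   /* block nothing extra while the handler runs */
    sa.sa_flags = 0;              /* no SA_RESETHAND: the handler resets to SIG_DFL itself */
    sigaction( SIGILL, &sa, old );
}

/* In sigill_handler(), the signal( sig, SIG_DFL ) call would become:
 *     struct sigaction dfl;
 *     memset( &dfl, 0, sizeof(dfl) );
 *     dfl.sa_handler = SIG_DFL;
 *     sigaction( sig, &dfl, NULL );
 *
 * and in x264_cpu_detect(), each signal() pair would become roughly:
 *     static struct sigaction oldact;
 *     install_sigill_handler( sigill_handler, &oldact );
 *     ...run the probe between sigsetjmp()/siglongjmp() as before...
 *     sigaction( SIGILL, &oldact, NULL );    restore the previous handler
 */

Since sigsetjmp( jmpbuf, 1 ) saves the signal mask, the siglongjmp() out of the handler restores it, so nothing stays blocked after the probe returns.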
EDIT:
The reason I'm trying to get rid of signal() is these build errors:
/home/devshark/SCRATCH/doubango/thirdparties/android/armv5te/lib/dist/libx264.a(cpu.o):cpu.c:function sigill_handler: error: undefined reference to 'bsd_signal'
/home/devshark/SCRATCH/doubango/thirdparties/android/armv5te/lib/dist/libx264.a(cpu.o):cpu.c:function x264_cpu_detect: error: undefined reference to 'bsd_signal'
/home/devshark/SCRATCH/doubango/thirdparties/android/armv5te/lib/dist/libx264.a(cpu.o):cpu.c:function x264_cpu_detect: error: undefined reference to 'bsd_signal'
/home/devshark/SCRATCH/doubango/thirdparties/android/armv5te/lib/dist/libx264.a(cpu.o):cpu.c:function x264_cpu_detect: error: undefined reference to 'bsd_signal'

I already know that this is a known NDK12 problem that might be solved by bringing bsd_signal back to the libc in NDK13. However, in its beta state with its unstable libc, NDK13 is currently missing the rand() function, and simply waiting for it might not do the trick. In the worst-case scenario, I guess I'll just have to wait for it and retry after its release. But as it currently stands, the prebuilt version of the library I want to use has text relocations and is being rejected by phones running a newer API / version of the Android OS.
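As a side note on the text-relocation goal, assuming the rebuilt code eventually ends up in a shared object, binutils' readelf can confirm whether the TEXTREL flag is actually gone (the .so name here is only an example):

readelf -d libx264.so | grep TEXTREL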
EDIT2:
I also know that signal() usually works by using sigaction() under the hood, but maybe then I won't get the bsd_signal-related build errors... since I'm suspecting that such an implementation isn't using it. This one is obviously using bsd_signal, which may or may not be the same underlying thing :/
-
Why does JavaFX run the converted mp4 but not the original?
20 February 2020, by brat
Simply put, I have this issue where I have an mp4 file with a video. If I run it directly with the code posted here, the window opens but it's white/blank. No movie showing.
But if I first run the original movie through ffmpeg, it works:
ffmpeg -i original.mp4 -s hd720 converted.mp4
Note that I'm converting from mp4 to mp4; I just change the size to hd720.
So, the question is, why do I need to do that?
My code:
package mypack;
import javafx.application.Application;
import javafx.scene.Group;
import javafx.scene.Scene;
import javafx.scene.layout.BorderPane;
import javafx.scene.media.Media;
import javafx.scene.media.MediaPlayer;
import javafx.scene.media.MediaView;
import javafx.stage.Stage;
public class MyPlayer extends Application{
public static void main(String[] args) {
launch(args);
}
public void start(Stage primaryStage) throws Exception {
BorderPane content = new BorderPane();
Scene scene = new Scene(content, 540, 209);
primaryStage.setScene(scene);
primaryStage.setScene(scene);
primaryStage.setTitle("Hello Media");
String source;
source = "file:///C:/Users/Noob/Desktop/Movies/original.mp4";
// source = "file:///C:/Users/Noob/Desktop/Movies/converted.mp4";
Media media = new Media(source);
MediaPlayer mediaPlayer = new MediaPlayer(media);
mediaPlayer.setAutoPlay(true);
MediaView mediaView = new MediaView(mediaPlayer);
((BorderPane) scene.getRoot()).getChildren().add(mediaView);
primaryStage.show();
}
}

For those audio/video aficionados/ffmpeg pros out there, this is what gets output for the original:
ffprobe -i original.mp4
ffprobe version 4.2.2 Copyright (c) 2007-2019 the FFmpeg developers
built with gcc 9.2.1 (GCC) 20200122
configuration: --enable-gpl --enable-version3 --enable-sdl2 --enable-fontconfig --enable-gnutls --enable-iconv --enable-libass --enable-libdav1d --enable-libbluray --enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libtheora --enable-libtwolame --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libzimg --enable-lzma --enable-zlib --enable-gmp --enable-libvidstab --enable-libvorbis --enable-libvo-amrwbenc --enable-libmysofa --enable-libspeex --enable-libxvid --enable-libaom --enable-libmfx --enable-amf --enable-ffnvcodec --enable-cuvid --enable-d3d11va --enable-nvenc --enable-nvdec --enable-dxva2 --enable-avisynth --enable-libopenmpt
libavutil 56. 31.100 / 56. 31.100
libavcodec 58. 54.100 / 58. 54.100
libavformat 58. 29.100 / 58. 29.100
libavdevice 58. 8.100 / 58. 8.100
libavfilter 7. 57.100 / 7. 57.100
libswscale 5. 5.100 / 5. 5.100
libswresample 3. 5.100 / 3. 5.100
libpostproc 55. 5.100 / 55. 5.100
[mov,mp4,m4a,3gp,3g2,mj2 @ 0000029bc02bc540] st: 0 edit list: 1 Missing key frame while searching for timestamp: 0
[mov,mp4,m4a,3gp,3g2,mj2 @ 0000029bc02bc540] st: 0 edit list 1 Cannot find an index entry before timestamp: 0.
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'original.mp4':
Metadata:
major_brand : mp42
minor_version : 0
compatible_brands: mp42mp41
creation_time : 2020-02-18T17:09:21.000000Z
Duration: 00:00:22.56, start: 0.000000, bitrate: 1631 kb/s
Stream #0:0(eng): Video: h264 (High) (avc1 / 0x31637661), yuv420p(tv, bt709), 3840x2160 [SAR 1:1 DAR 16:9], 1624 kb/s, 29.97 fps, 29.97 tbr, 29970 tbn, 59.94 tbc (default)
Metadata:
creation_time : 2020-02-18T17:09:21.000000Z
handler_name : Mainconcept MP4 Video Media Handler
encoder : AVC Coding

Could it have to do with size? Too big? Since it says 3840x2160 in the original? And I'm making it smaller with hd720?
If so, I would expect at least a corner of the movie showing, instead of a totally blank window.
I'm new at this, so if you could also give some good links on ffmpeg (excluding the official man pages) and/or a good tutorial/info page on video formats aimed at beginners, it would be much appreciated. My understanding so far is that there are containers which hold streams, and each stream can have different codecs depending on the container.
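A small diagnostic sketch, not part of the original code, that could be added in start() right after the MediaPlayer is created; it uses only standard javafx.scene.media callbacks to report the player/media error, if any, and the dimensions JavaFX parses from the file once it is ready:

// report decode/playback errors instead of failing silently into a blank window
mediaPlayer.setOnError(() ->
        System.err.println("MediaPlayer error: " + mediaPlayer.getError()));
media.setOnError(() ->
        System.err.println("Media error: " + media.getError()));
// once the media is ready, print the dimensions JavaFX has parsed from the file
mediaPlayer.setOnReady(() ->
        System.out.println("Media reports " + media.getWidth() + "x" + media.getHeight()));

If one of the error callbacks fires for original.mp4 but not for converted.mp4, the problem lies with the source file (for example its 3840x2160 H.264 stream) rather than with the code.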