
Media (91)

Other articles (75)

  • User profiles

    12 April 2011

    Each user has a profile page allowing them to edit their personal information. In the default top-of-page menu, a menu item is created automatically when MediaSPIP is initialized; it is visible only when the visitor is logged in on the site.
    The user can also edit their profile from their author page; a "Modifier votre profil" (modify your profile) link in the navigation is (...)

  • Configuring language support

    15 November 2010

    Accessing the configuration and adding supported languages
    To configure support for new languages, you need to go to the "Administer" section of the site.
    From there, the navigation menu gives access to a "Language management" section where support for new languages can be enabled.
    Each newly added language can still be disabled as long as no object has been created in that language. Once one has, it becomes greyed out in the configuration and (...)

  • Customizing categories

    21 June 2013

    The category creation form
    For those who know SPIP well, a category can be thought of as a rubrique (section).
    For a category-type document, the fields offered by default are: Text
    This form can be modified under:
    Administration > Form mask configuration.
    For a media-type document, the fields not displayed by default are: Short description
    It is also in this configuration section that you can specify the (...)

On other sites (11701)

  • Error while linking shared library FFMpegJNI to Android project

    5 March 2024, by Dari V

    I have built an FFmpeg module for all the architectures, but when I try to build the project to run it on an emulator (x86 architecture) I get an error indicating that the files for the x86 architecture are incompatible with elf_i386. The same happens for the other architectures: I get the error that the files are incompatible. What can I do in this case?

    [2/2] Linking CXX shared library E:\android_studio_projects\projects\IPTVPlayer\app\build\intermediates\cxx\Debug\1h3dp6s4\obj\x86\libffmpegJNI.so
FAILED: E:/android_studio_projects/projects/IPTVPlayer/app/build/intermediates/cxx/Debug/1h3dp6s4/obj/x86/libffmpegJNI.so 
cmd.exe /C "cd . && C:\Users\volos\AppData\Local\Android\Sdk\ndk\25.1.8937393\toolchains\llvm\prebuilt\windows-x86_64\bin\clang++.exe --target=i686-none-linux-android30 --sysroot=C:/Users/volos/AppData/Local/Android/Sdk/ndk/25.1.8937393/toolchains/llvm/prebuilt/windows-x86_64/sysroot -fPIC -g -DANDROID -fdata-sections -ffunction-sections -funwind-tables -fstack-protector-strong -no-canonical-prefixes -D_FORTIFY_SOURCE=2 -Wformat -Werror=format-security   -fno-limit-debug-info  -static-libstdc++ -Wl,--build-id=sha1 -Wl,--fatal-warnings -Wl,--gc-sections -Wl,--no-undefined -Qunused-arguments -shared -Wl,-soname,libffmpegJNI.so -o E:\android_studio_projects\projects\IPTVPlayer\app\build\intermediates\cxx\Debug\1h3dp6s4\obj\x86\libffmpegJNI.so CMakeFiles/ffmpegJNI.dir/E_/media3/media-release/libraries/decoder_ffmpeg/src/main/jni/ffmpeg_jni.cc.o  -landroid  E:/media3/media-release/libraries/decoder_ffmpeg/src/main/jni/ffmpeg/android-libs/x86/libswresample.a  E:/media3/media-release/libraries/decoder_ffmpeg/src/main/jni/ffmpeg/android-libs/x86/libavcodec.a  E:/media3/media-release/libraries/decoder_ffmpeg/src/main/jni/ffmpeg/android-libs/x86/libavutil.a  C:/Users/volos/AppData/Local/Android/Sdk/ndk/25.1.8937393/toolchains/llvm/prebuilt/windows-x86_64/sysroot/usr/lib/i686-linux-android/30/liblog.so  -latomic -lm && cd ."
ld: error: E:/media3/media-release/libraries/decoder_ffmpeg/src/main/jni/ffmpeg/android-libs/x86/libavcodec.a(mpegaudiotabs.o) is incompatible with elf_i386
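
    The ld message says the archive members under android-libs/x86 are not 32-bit x86 objects, which usually means that ABI's FFmpeg build was produced with the wrong target triple or that the per-ABI builds overwrote each other's output. One way to confirm what an object was really compiled for is to read the e_machine field of its ELF header; the NDK's llvm-readelf -h reports the same thing, and an archive member can be extracted first with llvm-ar x libavcodec.a. The small Java checker below is a hypothetical sketch of that inspection, not part of the asker's project:

    import java.io.IOException;
    import java.io.InputStream;
    import java.nio.ByteBuffer;
    import java.nio.ByteOrder;
    import java.nio.file.Files;
    import java.nio.file.Path;

    public class ElfArch {
        public static void main(String[] args) throws IOException {
            byte[] h = new byte[20];
            try (InputStream in = Files.newInputStream(Path.of(args[0]))) {
                // Bytes 0-3 must be the ELF magic: 0x7F 'E' 'L' 'F'.
                if (in.readNBytes(h, 0, 20) < 20
                        || h[0] != 0x7F || h[1] != 'E' || h[2] != 'L' || h[3] != 'F') {
                    System.out.println("not an ELF file (extract .a members first)");
                    return;
                }
            }
            // e_machine is a 16-bit little-endian field at offset 18 (Android ELFs are LE).
            int machine = ByteBuffer.wrap(h, 18, 2).order(ByteOrder.LITTLE_ENDIAN).getShort() & 0xFFFF;
            switch (machine) {
                case 0x03 -> System.out.println("EM_386: 32-bit x86 (what elf_i386 expects)");
                case 0x3E -> System.out.println("EM_X86_64: 64-bit x86");
                case 0x28 -> System.out.println("EM_ARM: armeabi-v7a");
                case 0xB7 -> System.out.println("EM_AARCH64: arm64-v8a");
                default -> System.out.println("e_machine = 0x" + Integer.toHexString(machine));
            }
        }
    }

    If the x86 copy of libavcodec.a reports EM_X86_64 or EM_ARM rather than EM_386, the fix is on the FFmpeg build side (one output directory and one --target per ABI), not in the CMake link step.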

  • FFmpeg streaming video Spring Boot endpoint does not show video duration in video players

    7 April 2024, by lxluxo23

    It turns out that I've been working on a personal project, just out of curiosity. Its main function is to stream video by first encoding it through FFmpeg, then playing that video back from any other device; call it a very, very primitive "Plex".

    Although I achieve my goal, which is to encode and send the video to the devices that request it, the video is sent as a live broadcast, so to speak: I can only pause it, with no forward or rewind. Does anyone have any idea what I am doing wrong, or whether I should take some other approach, either in my service or controller?

    Here are fragments of my code.

    THE CONTROLLER

    @RestController
    @RequestMapping("/api")
    @Log4j2
    public class StreamController {

        @Autowired
        VideoStreamingService videoStreamingService;

        @Autowired
        VideoService videoService;

        @GetMapping("/stream/{videoId}")
        public ResponseEntity<StreamingResponseBody> livestream(@PathVariable Long videoId, @RequestParam(required = false) String codec) {
            Video video = videoService.findVideoById(videoId);
            if (video != null) {
                Codec codecEnum = Codec.fromString(codec);
                return ResponseEntity.ok()
                        .contentType(MediaType.valueOf("video/mp4"))
                        .body(outputStream -> videoStreamingService.streamVideo(video.getPath(), outputStream, codecEnum));
            }
            return ResponseEntity.notFound().build();
        }
    }


    THE SERVICE


    @Service
    public class VideoStreamingService {

        public void streamVideo(String videoPath, OutputStream outputStream, Codec codec) {

            FFmpeg ffmpeg = FFmpeg.atPath()
                    .addArguments("-i", videoPath)
                    .addArguments("-b:v", "5000k")
                    .addArguments("-maxrate", "5000k")
                    .addArguments("-bufsize", "10000k")
                    .addArguments("-c:a", "aac")
                    .addArguments("-b:a", "320k")
                    .addArguments("-movflags", "frag_keyframe+empty_moov+faststart")
                    .addOutput(PipeOutput.pumpTo(outputStream)
                            .setFormat("mp4"))
                    .addArgument("-nostdin");
            if (codec == Codec.AMD) {
                ffmpeg.addArguments("-profile:v", "high");
            }
            ffmpeg.addArguments("-c:v", codec.getFfmpegArgument());
            ffmpeg.execute();
        }
    }


    I have some enums to vary the encoding and use hardware acceleration or not.
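
    The Codec enum itself is not included in the question. Purely as a hypothetical reconstruction, inferred only from the calls Codec.fromString(codec) and codec.getFfmpegArgument() in the code above, it might look like this (the encoder names are real FFmpeg encoders, but the repo's actual enum may differ):

    public enum Codec {
        CPU("libx264"),       // software encoding
        AMD("h264_amf"),      // AMD hardware acceleration
        NVIDIA("h264_nvenc"); // NVIDIA hardware acceleration

        private final String ffmpegArgument;

        Codec(String ffmpegArgument) {
            this.ffmpegArgument = ffmpegArgument;
        }

        public String getFfmpegArgument() {
            return ffmpegArgument;
        }

        // Falls back to software encoding when the query param is absent or unknown.
        public static Codec fromString(String value) {
            if (value == null) return CPU;
            try {
                return valueOf(value.toUpperCase());
            } catch (IllegalArgumentException e) {
                return CPU;
            }
        }
    }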


    And here is an example from my player; the endpoint is the following:


    http://localhost:8080/api/stream/2?codec=AMD
    (screenshot)


    I'm not sure if there's someone with more knowledge of either FFmpeg or Spring who could help me with this minor issue; I would greatly appreciate it. I've included the URL of the repository in case anyone would like to review it when answering my question.


    repo


    I tried changing the encoding format.
    I tried copying the metadata from the original video.
    I tried sending a custom header.
    None of this has worked.


    I would like to achieve what is mentioned on many sites: when you load a video from the network, the player shows how much of that video is already in the local "buffer".
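
    For what it's worth, the behaviour described is consistent with how the output above is produced: -movflags frag_keyframe+empty_moov emits a fragmented MP4 whose initial moov box carries no duration, and a StreamingResponseBody pipe sends no Content-Length and cannot honour HTTP Range requests, so players treat the stream as live (pause only, no seeking, no buffer bar). Below is a minimal sketch of one alternative, assuming the video has already been transcoded to an ordinary MP4 file on disk; the extra endpoint and its path are hypothetical, while Video, VideoService and videoService are the ones from the question:

    import org.springframework.core.io.FileSystemResource;
    import org.springframework.http.MediaType;
    import org.springframework.http.ResponseEntity;
    import org.springframework.web.bind.annotation.GetMapping;
    import org.springframework.web.bind.annotation.PathVariable;

    // Hypothetical companion method inside the same StreamController.
    @GetMapping("/stream/{videoId}/file")
    public ResponseEntity<FileSystemResource> streamFile(@PathVariable Long videoId) {
        Video video = videoService.findVideoById(videoId);
        if (video == null) {
            return ResponseEntity.notFound().build();
        }
        // Returning a Resource lets recent Spring MVC versions answer HTTP Range
        // requests (206 Partial Content) and set Content-Length, which is what a
        // player needs to show the duration and buffered ranges, and to seek.
        return ResponseEntity.ok()
                .contentType(MediaType.valueOf("video/mp4"))
                .body(new FileSystemResource(video.getPath()));
    }

    The trade-off is that the file must exist (or be transcoded ahead of time) rather than being encoded on the fly; for true on-the-fly encoding with seeking, a segmented format such as HLS or DASH is the usual route.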


  • Seeking Ideas: How Can I Automatically Generate a TikTok Video from a Custom Song Using C# [closed]

    14 May 2024, by Jamado

    I'm creating a C# program which creates a video from a song and posts it on TikTok.


    Right now my program:

    1. uses Spleeter to split the song into stems

    2. uses a script off GitHub to create waveform images of the stems

    I want my end video to look like this:


    https://vm.tiktok.com/ZMM7CDmUt/ - only one song will play per video


    https://vm.tiktok.com/ZMM7Xdw8b/


    https://vm.tiktok.com/ZMM7CcGtE/ - no webcam or those hitting animations


    Basically, I want the stems of the song to be placed on top of an FL Studio timeline, synced to the song; then I want to overlay an image on top of the video. And then, to cater to today's generation's 3-second attention span, add some audio visualizations on top of the FL Studio recording (the music-making app in the video) and a little shake to the image.


    I've tinkered with FFmpeg before, and I reckon it could do the trick here. I'd use the waveform pictures and mix them with a pre-recorded FL Studio video using FFmpeg's filters: vstack to stack images, scroll to slide them around, and blend; then tweak the overlay filter for that shake effect. Plus, I found out FFmpeg can whip up some basic audio visualizations, which is neat. (https://gist.github.com/Neurogami/aeed8693f7ac375d5e013b8432d04d3f)
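
    As a rough sketch of that filtergraph idea (shown here as a Java ProcessBuilder launch; a C# program would spawn the same ffmpeg command line via System.Diagnostics.Process): showwaves renders the song's waveform, a first overlay pins it to the bottom of the FL Studio capture, and a second overlay adds the image with a small sinusoidal shake. All file names are made up, and ffmpeg is assumed to be on PATH:

    import java.io.IOException;
    import java.util.List;

    public class WaveformOverlayDemo {
        public static void main(String[] args) throws IOException, InterruptedException {
            String screenCapture = "flstudio_timeline.mp4"; // pre-recorded FL Studio playback
            String song = "song.mp3";
            String image = "cover.png";

            String filter =
                    // draw a waveform from the song's audio track
                    "[1:a]showwaves=s=1080x200:mode=cline:colors=white[waves];"
                    // pin the waveform to the bottom of the screen capture
                    + "[0:v][waves]overlay=x=0:y=main_h-overlay_h[base];"
                    // overlay the image, with a small sinusoidal 'shake' on x
                    + "[base][2:v]overlay=x='(main_w-overlay_w)/2+8*sin(12*t)':y=120[out]";

            new ProcessBuilder(List.of(
                    "ffmpeg", "-y",
                    "-i", screenCapture,
                    "-i", song,
                    "-i", image,
                    "-filter_complex", filter,
                    "-map", "[out]", "-map", "1:a",
                    "-c:v", "libx264", "-c:a", "aac",
                    "-shortest", "output.mp4"))
                    .inheritIO()
                    .start()
                    .waitFor();
        }
    }

    This is only a starting point: the sizes, positions and shake expression would need tuning against the actual FL Studio recording.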


    But my main issue with this approach is that the waveform images will look weird/out of place on top of the FL Studio video, because FL Studio has a really specific "theme". I could manually create a template and then use some other library to merge the template image and the waveform image, but that feels a bit janky and would probably be a hassle to set up and implement.


    So, I'm curious whether you folks have any nifty libraries, GitHub gems, or ideas to help me nail this video?
