

Other articles (53)

  • Customizing categories

    21 June 2013

    Category creation form
    For those who know SPIP well, a category can be thought of as a rubrique (SPIP section).
    For a document of type "category", the fields offered by default are: Text
    This form can be modified under:
    Administration > Configuration des masques de formulaire (form template configuration).
    For a document of type "media", the fields not displayed by default are: Descriptif rapide (short description)
    It is also in this configuration section that you can specify the (...)

  • HTML5 audio and video support (French)

    10 April 2011

    MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
    For older browsers, the Flowplayer Flash player is used.
    The HTML5 player used was created specifically for MediaSPIP: its appearance is fully customizable to match a chosen theme.
    These technologies make it possible to deliver video and sound both on conventional computers (...)

  • HTML5 audio and video support

    13 April 2011

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The HTML5 player used has been created specifically for MediaSPIP and can easily be adapted to fit in with a specific theme.
    For older browsers, the Flowplayer Flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

On other sites (9038)

  • Splitting audio tracks with incorrect length - FFMPEG

    26 March 2018, by channae

    Version: com.writingminds:FFmpegAndroid:0.3.2

    I have an audio file 43 seconds long, and I wrote an algorithm to split it at the word boundary nearest each 10-second mark (I used IBM Watson to get each word's ending timestamp), so each crop is always around 10 to 11 seconds, except of course the fifth one. I have printed my commands so that you will understand my use case better.

    System.out: Split Command: -y -i /storage/emulated/0/AudioClipsForSpeakerRecognition/merge.wav -ss 00:00:00.000 -codec copy -t 00:00:10.010 /storage/emulated/0/AudioClipsForSpeakerRecognition/segment_1.wav

    System.out: Split Command: -y -i /storage/emulated/0/AudioClipsForSpeakerRecognition/merge.wav -ss 00:00:10.010 -codec copy -t 00:00:21.090 /storage/emulated/0/AudioClipsForSpeakerRecognition/segment_2.wav

    System.out: Split Command: -y -i /storage/emulated/0/AudioClipsForSpeakerRecognition/merge.wav -ss 00:00:21.090 -codec copy -t 00:00:30.480 /storage/emulated/0/AudioClipsForSpeakerRecognition/segment_3.wav

    System.out: Split Command: -y -i /storage/emulated/0/AudioClipsForSpeakerRecognition/merge.wav -ss 00:00:30.480 -codec copy -t 00:00:40.120 /storage/emulated/0/AudioClipsForSpeakerRecognition/segment_4.wav

    System.out: Split Command: -y -i /storage/emulated/0/AudioClipsForSpeakerRecognition/merge.wav -ss 00:00:40.120 -codec copy -t 00:00:43.000 /storage/emulated/0/AudioClipsForSpeakerRecognition/segment_5.wav

    However, when playing the cropped files I noticed that segment_1 is about 10 seconds long but segment_2 is about 20 seconds, and so on; some of the audio that belongs to segment_1 also appears in segment_2, etc. Why is this happening?

    Appreciate your response.
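
    A likely cause, judging from the printed commands: ffmpeg's -t option takes an output duration, not an end timestamp. The values passed here (00:00:21.090, 00:00:30.480, ...) look like absolute end times, so every segment after the first runs long and overlaps the next one. A minimal sketch of the fix for segment_2, assuming the same paths: either pass the difference (end minus start) to -t, or use -to, which does take an end position:

    -y -i /storage/emulated/0/AudioClipsForSpeakerRecognition/merge.wav -ss 00:00:10.010 -codec copy -t 00:00:11.080 /storage/emulated/0/AudioClipsForSpeakerRecognition/segment_2.wav

    -y -i /storage/emulated/0/AudioClipsForSpeakerRecognition/merge.wav -ss 00:00:10.010 -codec copy -to 00:00:21.090 /storage/emulated/0/AudioClipsForSpeakerRecognition/segment_2.wav

    (11.080 is 21.090 minus 10.010. Whether the ffmpeg build bundled with com.writingminds:FFmpegAndroid:0.3.2 supports -to is an assumption worth verifying, and -codec copy cuts on packet boundaries, so segment edges can still drift by a fraction of a second.)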

  • FFmpeg Streaming Video SpringBoot endpoint does not show video duration in video players

    7 April 2024, by lxluxo23

    It turns out I've been working on a personal project just out of curiosity. Its main function is to stream video by first encoding it through ffmpeg and then playing that video back from any other device; call it a very, very primitive "plex".

    


    Although I achieve my goal, which is to encode and send the video to the devices that request it, the video is sent as a live broadcast, so to speak: I can only pause it, not seek forward or rewind. Does anyone have any idea what I am doing wrong, or whether I should take some other approach in my service or controller?

    


    I leave fragments of my code

    


    THE CONTROLLER

    


    @RestController
    @RequestMapping("/api")
    @Log4j2
    public class StreamController {

        @Autowired
        VideoStreamingService videoStreamingService;

        @Autowired
        VideoService videoService;

        @GetMapping("/stream/{videoId}")
        public ResponseEntity<StreamingResponseBody> livestream(@PathVariable Long videoId, @RequestParam(required = false) String codec) {
            Video video = videoService.findVideoById(videoId);
            if (video != null) {
                Codec codecEnum = Codec.fromString(codec);
                return ResponseEntity.ok()
                        .contentType(MediaType.valueOf("video/mp4"))
                        .body(outputStream -> videoStreamingService.streamVideo(video.getPath(), outputStream, codecEnum));
            }
            return ResponseEntity.notFound().build();
        }
    }


    THE SERVICE


    @Service
    public class VideoStreamingService {

        public void streamVideo(String videoPath, OutputStream outputStream, Codec codec) {

            FFmpeg ffmpeg = FFmpeg.atPath()
                    .addArguments("-i", videoPath)
                    .addArguments("-b:v", "5000k")
                    .addArguments("-maxrate", "5000k")
                    .addArguments("-bufsize", "10000k")
                    .addArguments("-c:a", "aac")
                    .addArguments("-b:a", "320k")
                    // fragmented MP4: the moov box is written up front, before the
                    // total duration is known, so it carries no duration info
                    .addArguments("-movflags", "frag_keyframe+empty_moov+faststart")
                    .addOutput(PipeOutput.pumpTo(outputStream)
                            .setFormat("mp4"))
                    .addArgument("-nostdin");
            if (codec == Codec.AMD) {
                ffmpeg.addArguments("-profile:v", "high");
            }
            ffmpeg.addArguments("-c:v", codec.getFfmpegArgument());
            ffmpeg.execute();
        }
    }


    I have some enums to vary the encoding and to enable or disable hardware acceleration.


    And here is an example from my player; the endpoint is the following:


    http://localhost:8080/api/stream/2?codec=AMD


    If someone with more knowledge of either FFmpeg or Spring could help me with this minor issue, I would greatly appreciate it. I've included the URL of the repository in case anyone would like to review it when answering my question.


    repo


    I tried changing the encoding format.
    I tried copying the metadata from the original video.
    I tried sending a custom header.
    None of this has worked.


    I would like to achieve what is mentioned on many sites: when you load a video from the network, the player shows how much of that video is already in the local "buffer".
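
    For what it's worth, the symptom matches the flags used in the service above: -movflags frag_keyframe+empty_moov produces a fragmented MP4 whose up-front moov box carries no duration, and the endpoint pipes it without Content-Length or Range support, so players treat it as a live stream (pause only, no seek bar, no buffer indicator). If the encoded file can be written to disk first, a common Spring pattern is to serve it with HTTP Range support instead. A minimal sketch, not the project's actual code: Video and videoService are names from the question, everything else is illustrative.

    import java.io.IOException;
    import java.util.List;

    import org.springframework.beans.factory.annotation.Autowired;
    import org.springframework.core.io.FileSystemResource;
    import org.springframework.core.io.support.ResourceRegion;
    import org.springframework.http.HttpHeaders;
    import org.springframework.http.HttpRange;
    import org.springframework.http.HttpStatus;
    import org.springframework.http.MediaType;
    import org.springframework.http.ResponseEntity;
    import org.springframework.web.bind.annotation.*;

    @RestController
    @RequestMapping("/api")
    public class RangeStreamController {

        @Autowired
        VideoService videoService; // from the question

        // Serves an already-encoded MP4 with 206 partial-content responses so the
        // player can learn the total size, read the moov atom, and seek freely.
        @GetMapping("/stream/{videoId}/file")
        public ResponseEntity<ResourceRegion> streamFile(@PathVariable Long videoId,
                                                         @RequestHeader HttpHeaders headers) throws IOException {
            Video video = videoService.findVideoById(videoId);
            FileSystemResource mp4 = new FileSystemResource(video.getPath());
            long length = mp4.contentLength();

            List<HttpRange> ranges = headers.getRange();
            ResourceRegion region = ranges.isEmpty()
                    // no Range header yet: return an initial 1 MiB chunk as 206 so
                    // the client discovers that range requests are supported
                    ? new ResourceRegion(mp4, 0, Math.min(1 << 20, length))
                    : HttpRange.toResourceRegions(ranges, mp4).get(0);

            return ResponseEntity.status(HttpStatus.PARTIAL_CONTENT)
                    .contentType(MediaType.valueOf("video/mp4"))
                    .body(region);
        }
    }

    With a finite, range-addressable file encoded as a regular faststart MP4 (moov at the front, not empty_moov), players should show the duration and allow seeking. If on-the-fly transcoding is a hard requirement, the output is live-like by nature, and a segmented protocol such as HLS or DASH is the usual way to get a seekable timeline back.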


  • Cut video in chunks at nearest keyframe to the nearest scene change (ffmpeg)

    11 January 2024, by Toxiro

    I want to cut a video into, let's say, approximately 30-minute chunks. I first want to find the nearest scene change to each 30-minute mark, and then cut at the keyframe nearest that scene change (without re-encoding).


    I guess I can't do that in a single ffmpeg command, but maybe some pros here have very helpful tips. Is it possible, for example, to output only the nearest scene change (or just the next scene change) after a given timecode, so that ffmpeg does not need to analyze the whole video?


    I hope someone can help. Once I have a solution, probably a bash script, I will post it here.
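
    For what it's worth, a sketch of the two building blocks, assuming a hypothetical input.mp4. The first command prints showinfo lines (on stderr) for frames that the select filter flags as scene changes, analyzing only a bounded window (here two minutes starting at 29:00) instead of the whole video; the second lists keyframe timestamps in the same window. A script can then pick the keyframe nearest the chosen scene change and cut with -ss/-to and -c copy:

    ffmpeg -ss 00:29:00 -t 120 -i input.mp4 -vf "select='gt(scene,0.4)',showinfo" -an -f null -

    ffprobe -read_intervals 29:00%+120 -select_streams v -skip_frame nokey -show_entries frame=pts_time -of csv=p=0 input.mp4

    Two caveats: 0.4 is a common scene-score threshold rather than a universal constant, and because -ss is placed before -i, the pts_time values that showinfo reports are relative to the seek point, so the 29:00 offset has to be added back. The ffprobe timestamps are absolute.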
