Advanced search

Media (0)

No media matching your criteria is available on the site.

Other articles (54)

  • Websites made with MediaSPIP

    2 May 2011

    This page lists some websites based on MediaSPIP.

  • Creating farms of unique websites

    13 April 2011

    MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
    This allows (among other things): implementation costs to be shared between several different projects/individuals; rapid deployment of multiple unique sites; creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)

  • Contribute to a better visual interface

    13 April 2011

    MediaSPIP is based on a system of themes and templates. Templates define the placement of information on the page, and can be adapted to a wide range of uses. Themes define the overall graphic appearance of the site.
    Anyone can submit a new graphic theme or template and make it available to the MediaSPIP community.

On other sites (9062)

  • Need help to create a queue system with a discord.py bot

    19 August 2024, by Zamv

    I'm trying to create a bot with discord.py. I'll start by saying that the bot is for personal use, so please don't point out in the comments that I shouldn't use ydl, thank you.
Also, I'm a beginner, so please tell me what I can improve and point out any errors; I will be happy to listen.
That said, I've been stuck trying to create this queue system for songs for the past 3 days. The bot was working perfectly fine and played some URLs before I implemented the play_next function; now it doesn't even play the first song, even though it sends confirmation messages both in the terminal and in the Discord channel. I know the code might be a little confusing, but I really need to know how to make this work.

    I wrote two main functions: "play_next", to play the next song in the queue, and "play", the main command.
Here is my play_next function:

    async def play_next(self, ctx):
        if not ctx.voice_client.is_playing():  # (not sure if this is correct, as I also check whether the bot is playing in the play function) — note: is_playing is a method, so it must be called with ()
            if self.song_queue:  # was self.song.queue, which raises AttributeError; the attribute is self.song_queue
                next_song = self.song_queue.pop(0)  # delete the first element of the queue, so if we have (song_0, song_1, song_2) the bot deletes song_0, making song_1 the new head of the queue
                try:
                    ctx.voice_client.play(discord.FFmpegPCMAudio(next_song),
                                          after=lambda e: self.bot.loop.create_task(self.play_next(ctx)))  # meant to chain to the next song automatically; see the note below, after the question text, about calling this from the player thread

                    # confirmation embed (not important)
                    embed = discord.Embed(
                        title="Song",
                        description=f"Now playing {next_song}",
                        color=0x1DB954
                    )
                    await ctx.send(embed=embed)
                except Exception as e:
                    print(f"Error playing the next song: {e}")

            else:
                await ctx.voice_client.disconnect()  # disconnect from the voice channel if the queue is empty
                print("Disconnected from the voice channel as the queue is empty.")


    I think the main problem of my bot is the play_next function, but here is my "play" function:

    @commands.command(pass_context=True)
    async def play(self, ctx, url: str):
        if ctx.author.voice:  # first check that the user is in a voice channel
            if not ctx.voice_client:  # then check whether the bot is already connected to a voice channel
                channel = ctx.message.author.voice.channel
                try:
                    await channel.connect()  # then join
                    print("I joined the voice channel!")
                except Exception as e:
                    print(f"Failed to connect to the voice channel: {e}")
                    return

            song_path = f"song_{self.song_index}.mp3"  # note: I'm using cogs and declared self.song_index with the value of 0
            self.song_index += 1  # I thought this was the easiest way of creating a new file for each song

            ydl_opts = {
                'format': 'bestaudio/best',
                'postprocessors': [{  # was misspelled 'postprocesors', so the MP3 extraction step was silently skipped
                    'key': 'FFmpegExtractAudio',
                    'preferredcodec': 'mp3',
                    'preferredquality': '192',
                }],
                'outtmpl': song_path
            }
            try:
                with youtube_dl.YoutubeDL(ydl_opts) as ydl:
                    print("Starting download...")
                    ydl.download([url])  # download the song from the provided url
                    print("Download finished.")
            except Exception as e:
                await ctx.send(f"Failed to download the song: {e}")
                return

            # I think the next two if statements are the main suspects
            if os.path.exists(song_path):  # check that the download actually produced a file
                self.song_queue.append(song_path)  # append it to the queue
                embed = discord.Embed(
                    title="Added to Queue!",
                    description=f"{url} has been added to the queue.",
                    color=0x33ff33
                )
                await ctx.send(embed=embed)

                if not ctx.voice_client.is_playing():  # if nothing is playing, start the next song (not sure if this check belongs here)
                    print("Starting playback...")
                    await self.play_next(ctx)

            else:
                await ctx.send("Failed to download or find the audio file.")

        else:
            embed = discord.Embed(
                title="❌ You must be in a voice channel",
                color=0xff6666
            )
            await ctx.send(embed=embed)


    Thanks in advance; I will try to read and answer any questions as soon as possible.
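
    A note on the line flagged above with "correct me if I am wrong": in discord.py, the after callback passed to voice_client.play runs on the voice player's own thread, not on the event loop, so calling self.bot.loop.create_task from it is not thread-safe. The usual idiom is asyncio.run_coroutine_threadsafe. A minimal sketch, assuming the same self.bot, self.song_queue and play_next names as in the question:

    import asyncio
    import discord

    async def play_next(self, ctx):
        # sketch of the queue-advance logic with a thread-safe after callback
        if ctx.voice_client and not ctx.voice_client.is_playing():
            if self.song_queue:
                next_song = self.song_queue.pop(0)

                def _after(error):
                    if error:
                        print(f"Player error: {error}")
                    # the after callback runs on the audio player's thread,
                    # so schedule the coroutine on the bot loop thread-safely
                    asyncio.run_coroutine_threadsafe(self.play_next(ctx), self.bot.loop)

                ctx.voice_client.play(discord.FFmpegPCMAudio(next_song), after=_after)
            else:
                await ctx.voice_client.disconnect()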

  • Anomaly in raw I420 video generated by GStreamer

    16 April 2024, by Lea

    Situation

    I'm trying to convert RGBA image data to YUV420P in multiple threads, then send this data to a main thread, which splits the data it receives from each thread into separate frames and combines them, in order, into a video. Currently I'm using FFmpeg for this task, but I've found GStreamer to do a quicker job at colorspace conversion than FFmpeg.

    Problem

    The raw video generated by GStreamer does not match the expectations for YUV 4:2:0 planar video data. To test this, I've made a raw RGBA test video of 3 red 4x4 (16 pixel) frames.

    ffmpeg -f lavfi -i color=color=red -t 3 -r 1 -s 4x4 -f rawvideo -pix_fmt rgba ./input.rgba
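
    For reference, that command should produce a 192-byte file: 4 × 4 pixels × 4 bytes per RGBA pixel × 3 frames. A quick Python check:

    import os

    # 4x4 pixels, 4 bytes per RGBA pixel, 3 frames => 192 bytes
    assert os.path.getsize("input.rgba") == 4 * 4 * 4 * 3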

    Example data

    FFmpeg

    Now, first trying to convert it via FFmpeg, as I'm doing it currently:

    ffmpeg -f rawvideo -pix_fmt rgba -s 4x4 -i input.rgba -f rawvideo -pix_fmt yuv420p ./ffmpeg.yuv420p

    This creates a 72-byte file => 1.5 bytes per pixel, 24 bytes per frame: as expected for yuv420p data.

    $ hexdump -C ./ffmpeg.yuv420p 
00000000  51 51 51 51 50 50 50 50  50 50 50 50 50 50 50 50  |QQQQPPPPPPPPPPPP|
00000010  5b 5b 5b 5b ee ee ee ee  51 51 51 51 50 50 50 50  |[[[[....QQQQPPPP|
00000020  50 50 50 50 50 50 50 50  5b 5b 5b 5b ee ee ee ee  |PPPPPPPP[[[[....|
00000030  51 51 51 51 50 50 50 50  50 50 50 50 50 50 50 50  |QQQQPPPPPPPPPPPP|
00000040  5b 5b 5b 5b ee ee ee ee                           |[[[[....|
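
    As a quick sanity check of that arithmetic, a small Python sketch (assuming the 4x4, 3-frame test clip from above) computing the expected unpadded I420 plane sizes:

    # planar YUV 4:2:0: one full-resolution Y plane plus
    # quarter-resolution U and V planes, tightly packed
    width, height, frames = 4, 4, 3
    y_size = width * height                          # 16 bytes per frame
    u_size = v_size = (width // 2) * (height // 2)   # 4 bytes each
    frame_size = y_size + u_size + v_size            # 24 bytes per frame
    print(frame_size * frames)                       # 72 bytes, matching the hexdump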

    GStreamer

    Now trying to do the same via GStreamer, with the I420 format, which corresponds to yuv420p as per their documentation:

    gst-launch-1.0 filesrc location=./input.rgba ! rawvideoparse format=rgba width=4 height=4 \
! videoconvert ! video/x-raw,format=I420 ! filesink location=./gstreamer.yuv420p

    This creates a 96-byte file => 2 bytes per pixel, 32 bytes per frame (?): unusual for yuv420p data. Additionally, none of the sections match the FFmpeg output, ruling out some kind of padding.

    $ hexdump -C ./gstreamer.yuv420p 
00000000  50 50 50 50 50 50 50 50  50 50 50 50 50 50 50 50  |PPPPPPPPPPPPPPPP|
00000010  5a 5a 00 00 5a 5a 00 00  ee ee 00 00 ed ed 00 00  |ZZ..ZZ..........|
00000020  50 50 50 50 50 50 50 50  50 50 50 50 50 50 50 50  |PPPPPPPPPPPPPPPP|
00000030  5a 5a 63 6b 5a 5a 77 62  ee ee 78 6d ed ed 2c 20  |ZZckZZwb..xm.., |
00000040  50 50 50 50 50 50 50 50  50 50 50 50 50 50 50 50  |PPPPPPPPPPPPPPPP|
00000050  5a 5a 00 00 5a 5a 00 00  ee ee 00 00 ed ed 00 00  |ZZ..ZZ..........|

    This output also cannot be interpreted correctly as yuv420p by FFmpeg, leading to corrupted frames when trying to do so:

    ffmpeg -f rawvideo -pix_fmt yuv420p -s 4x4 -i gstreamer.yuv420p -f image2 "./%d.png"

    (Screenshot: corrupted yuv420p frames generated by GStreamer)

    Solution?

    For my personal problem I need a way to chop up raw I420 video generated by GStreamer into separate frames to work with. However, I would also like to understand why GStreamer behaves this way and which key piece I'm missing here.

    Additional notes

    I've ruled out an issue with the input in GStreamer, as piping it to autovideosink leads to a normal result. I'm also aware of multifilesink, but I would like to avoid writing to disk and rather work with the data directly in buffers.
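
    A hedged reading of the hexdump above, offered as an inference rather than a confirmed answer: the layout is consistent with GStreamer aligning each plane's row stride. The 4-byte Y rows need no padding, but each 2-byte chroma row appears to be followed by 2 filler bytes (the 00 00 pairs, and what looks like uninitialized garbage in the second frame), giving 16 + 8 + 8 = 32 bytes per frame. The chroma values also differ slightly from FFmpeg's (5a/5b, ed/ee, likely converter rounding), which would explain why no section matches byte-for-byte. Assuming that stride-padded layout, a minimal Python sketch to repack the output as tightly packed yuv420p:

    # Assumption: each chroma row is padded from 2 to 4 bytes (stride 4).
    # This is inferred from the hexdump, not confirmed GStreamer behavior.
    width, height = 4, 4
    y_size = width * height                        # 16 bytes, already tight
    chroma_w, chroma_h = width // 2, height // 2   # 2x2 chroma planes
    chroma_stride = 4                              # observed padded stride
    padded_frame = y_size + 2 * chroma_stride * chroma_h  # 32 bytes

    with open("gstreamer.yuv420p", "rb") as f_in, \
         open("depadded.yuv420p", "wb") as f_out:
        while frame := f_in.read(padded_frame):
            f_out.write(frame[:y_size])            # copy the Y plane as-is
            for plane in range(2):                 # U plane, then V plane
                base = y_size + plane * chroma_stride * chroma_h
                for row in range(chroma_h):
                    start = base + row * chroma_stride
                    f_out.write(frame[start:start + chroma_w])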

  • FFmpeg streaming video Spring Boot endpoint does not show video duration in video players

    7 April 2024, by lxluxo23

    It turns out that I've been working on a personal project just out of curiosity.
The main function is to stream video by first encoding it through ffmpeg,
then play back said video from any other device.
Call it a very, very primitive "plex".

    Although I achieve my goal, which is to encode and send the video to the devices that make the request,
the video is sent, so to speak, as a live broadcast:
I can only pause it, with no forward or rewind. Does anyone have any idea what I am doing wrong, or whether I should take some other approach in either my service or my controller?

    Here are fragments of my code.

    THE CONTROLLER

    @RestController
    @RequestMapping("/api")
    @Log4j2
    public class StreamController {

        @Autowired
        VideoStreamingService videoStreamingService;

        @Autowired
        VideoService videoService;

        @GetMapping("/stream/{videoId}")
        public ResponseEntity<StreamingResponseBody> livestream(@PathVariable Long videoId, @RequestParam(required = false) String codec) {
            Video video = videoService.findVideoById(videoId);
            if (video != null) {
                Codec codecEnum = Codec.fromString(codec);
                return ResponseEntity.ok()
                        .contentType(MediaType.valueOf("video/mp4"))
                        .body(outputStream -> videoStreamingService.streamVideo(video.getPath(), outputStream, codecEnum));
            }
            return ResponseEntity.notFound().build();
        }
    }


    THE SERVICE


    @Service
    public class VideoStreamingService {

        public void streamVideo(String videoPath, OutputStream outputStream, Codec codec) {

            FFmpeg ffmpeg = FFmpeg.atPath()
                    .addArguments("-i", videoPath)
                    .addArguments("-b:v", "5000k")
                    .addArguments("-maxrate", "5000k")
                    .addArguments("-bufsize", "10000k")
                    .addArguments("-c:a", "aac")
                    .addArguments("-b:a", "320k")
                    .addArguments("-movflags", "frag_keyframe+empty_moov+faststart")
                    .addOutput(PipeOutput.pumpTo(outputStream)
                            .setFormat("mp4"))
                    .addArgument("-nostdin");
            if (codec == Codec.AMD) {
                ffmpeg.addArguments("-profile:v", "high");
            }
            ffmpeg.addArguments("-c:v", codec.getFfmpegArgument());
            ffmpeg.execute();
        }
    }


    I have some enums to vary the encoding and use hardware acceleration or not.


    And here is an example from my player; the endpoint is the following:


    http://localhost:8080/api/stream/2?codec=AMD

    (screenshot)


    I'm not sure if there's someone with more knowledge of either FFmpeg or Spring who could help me with this minor issue; I would greatly appreciate it. I've included the URL of the repository in case anyone would like to review it when answering my question:


    repo


    I tried changing the encoding format.
I tried copying the metadata from the original video.
I tried sending a custom header.
None of this has worked.


    I would like to achieve what is mentioned on many sites: when you load a video from the network, the player shows how much of that video you have in the local "buffer".
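
    A hedged note inferred from the service code above, not a confirmed diagnosis: -movflags frag_keyframe+empty_moov produces a fragmented MP4 whose initial moov atom carries no duration or sample index (and faststart has no effect when muxing to a non-seekable pipe), which matches the "live broadcast, pause only" behavior players fall back to. As an experiment, remuxing the same stream into a regular MP4 written to a seekable file should make the duration reappear; a minimal Python sketch with hypothetical file names:

    import subprocess

    # Remux without re-encoding into a normal MP4 on disk. With +faststart,
    # ffmpeg moves the moov atom (duration + sample index, which players
    # need for a seek bar) to the front of the finished file.
    subprocess.run([
        "ffmpeg",
        "-i", "input.mp4",          # hypothetical source path
        "-c", "copy",               # copy streams, container change only
        "-movflags", "+faststart",  # relocate moov after muxing completes
        "output.mp4",               # hypothetical output path
    ], check=True)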
