Advanced search

Media (0)


No media matching your criteria is available on the site.

Other articles (60)

  • The SPIPmotion queue

    28 November 2010

    A queue stored in the database
    When it is installed, SPIPmotion creates a new table in the database named spip_spipmotion_attentes.
    This new table is made up of the following fields: id_spipmotion_attente, the unique numeric identifier of the task to be processed; id_document, the numeric identifier of the original document to be encoded; id_objet, the unique identifier of the object to which the encoded document will automatically be attached; objet, the type of object to which (...)

  • Adding user-specific information and other changes to author-related behaviour

    12 April 2011

    The simplest way to add information to authors is to install the Inscription3 plugin. It also makes it possible to modify certain user-related behaviours (refer to its documentation for more information).
    It is also possible to add fields to authors by installing the plugins champs extras 2 and Interface pour champs extras.

  • HTML5 audio and video support

    13 April 2011

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
    For older browsers, the Flowplayer Flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

On other sites (7438)

  • How to obtain time markers for video splitting using python/OpenCV

    30 March 2016, by Bleddyn Raw-Rees

    Hi, I'm new to the world of programming and computer vision, so please bear with me.

    I’m working on my MSc project which is researching automated deletion of low value content in digital file stores. I’m specifically looking at the sort of long shots that often occur in natural history filming whereby a static camera is left rolling in order to capture the rare snow leopard or whatever. These shots may only have some 60s of useful content with perhaps several hours of worthless content either side.

    As a first step I have a simple motion detection program from Adrian Rosebrock's tutorial [http://www.pyimagesearch.com/2015/05/25/basic-motion-detection-and-tracking-with-python-and-opencv/#comment-393376]. Next I intend to use FFmpeg to split the video.

    What I would like help with is how to get in and out points based on the first and last points that motion is detected in the video.

    Here is the code, should you wish to see it...

    # import the necessary packages
    import argparse
    import datetime
    import imutils
    import time
    import cv2

    # construct the argument parser and parse the arguments
    ap = argparse.ArgumentParser()
    ap.add_argument("-v", "--video", help="path to the video file")
    ap.add_argument("-a", "--min-area", type=int, default=500, help="minimum area size")
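    # note: argparse turns "--min-area" into the dictionary key "min_area"
    # (dashes in option names become underscores)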
    args = vars(ap.parse_args())

    # if the video argument is None, then we are reading from webcam
    if args.get("video", None) is None:
       camera = cv2.VideoCapture(0)
       time.sleep(0.25)

    # otherwise, we are reading from a video file
    else:
       camera = cv2.VideoCapture(args["video"])

    # initialize the first frame in the video stream
    firstFrame = None

    # loop over the frames of the video
    while True:
       # grab the current frame and initialize the occupied/unoccupied
       # text
       (grabbed, frame) = camera.read()
       text = "Unoccupied"

       # if the frame could not be grabbed, then we have reached the end
       # of the video
       if not grabbed:
           break

       # resize the frame, convert it to grayscale, and blur it
       frame = imutils.resize(frame, width=500)
       gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
       gray = cv2.GaussianBlur(gray, (21, 21), 0)

       # if the first frame is None, initialize it
       if firstFrame is None:
           firstFrame = gray
           continue

       # compute the absolute difference between the current frame and
       # first frame
       frameDelta = cv2.absdiff(firstFrame, gray)
       thresh = cv2.threshold(frameDelta, 25, 255, cv2.THRESH_BINARY)[1]

       # dilate the thresholded image to fill in holes, then find contours
       # on thresholded image
       thresh = cv2.dilate(thresh, None, iterations=2)
       (_, cnts, _) = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
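       # note: this 3-value unpacking matches OpenCV 3.x; OpenCV 2.x and 4.x
       # return only (contours, hierarchy), so adjust the unpacking if needed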

       # loop over the contours
       for c in cnts:
           # if the contour is too small, ignore it
           if cv2.contourArea(c) < args["min_area"]:
               continue

           # compute the bounding box for the contour, draw it on the frame,
           # and update the text
           (x, y, w, h) = cv2.boundingRect(c)
           cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
           text = "Occupied"

       # draw the text and timestamp on the frame
       cv2.putText(frame, "Room Status: {}".format(text), (10, 20),
           cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 255), 2)
       cv2.putText(frame, datetime.datetime.now().strftime("%A %d %B %Y %I:%M:%S%p"),
           (10, frame.shape[0] - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.35, (0, 0, 255), 1)

       # show the frame and record if the user presses a key
       cv2.imshow("Security Feed", frame)
       cv2.imshow("Thresh", thresh)
       cv2.imshow("Frame Delta", frameDelta)
       key = cv2.waitKey(1) & 0xFF

       # if the `q` key is pressed, break from the loop
       if key == ord("q"):
           break

    # cleanup the camera and close any open windows
    camera.release()
    cv2.destroyAllWindows()

    Thanks!
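
    One possible approach, sketched below under stated assumptions rather than taken from the original post: track the first and last frame indices at which any contour exceeds the area threshold, convert them to seconds using the capture's frame rate, then hand those in/out points to ffmpeg. The function names, the 25 fps fallback, and the file names are illustrative.

    # a minimal sketch: find the first and last frames showing motion,
    # then cut that span out with ffmpeg (stream copy)
    import subprocess

    import cv2

    def find_motion_span(video_path, min_area=500):
        camera = cv2.VideoCapture(video_path)
        fps = camera.get(cv2.CAP_PROP_FPS) or 25.0  # fall back if FPS unknown
        firstFrame = None
        frame_idx = 0
        first_motion = last_motion = None
        while True:
            (grabbed, frame) = camera.read()
            if not grabbed:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            gray = cv2.GaussianBlur(gray, (21, 21), 0)
            if firstFrame is None:
                firstFrame = gray
                frame_idx += 1
                continue
            frameDelta = cv2.absdiff(firstFrame, gray)
            thresh = cv2.threshold(frameDelta, 25, 255, cv2.THRESH_BINARY)[1]
            thresh = cv2.dilate(thresh, None, iterations=2)
            found = cv2.findContours(thresh, cv2.RETR_EXTERNAL,
                                     cv2.CHAIN_APPROX_SIMPLE)
            cnts = found[0] if len(found) == 2 else found[1]  # 3.x vs 2.x/4.x
            if any(cv2.contourArea(c) >= min_area for c in cnts):
                if first_motion is None:
                    first_motion = frame_idx
                last_motion = frame_idx
            frame_idx += 1
        camera.release()
        if first_motion is None:
            return None  # no motion found anywhere in the video
        return (first_motion / fps, last_motion / fps)

    def split_video(video_path, out_path, in_s, out_s):
        # -c copy cuts on keyframes; drop it and re-encode for exact cuts
        subprocess.run(["ffmpeg", "-i", video_path, "-ss", str(in_s),
                        "-to", str(out_s), "-c", "copy", out_path],
                       check=True)

    span = find_motion_span("leopard.mp4")
    if span is not None:
        split_video("leopard.mp4", "leopard_trimmed.mp4", *span)

    Padding the in and out points by a second or two on each side would avoid clipping action right at the detection boundary.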

  • FFmpeg doesn't play audio, yet no error shown

    4 August 2023, by Kristupas

    So I'm learning Python and don't know much about FFmpeg. I am following a tutorial, which explains everything very clearly. Everything is working, with one exception: whenever I try to get it to play a sound, it won't. Here's what it says:

    INFO     discord.player ffmpeg process 2540 successfully terminated with return code of 1.

    And here's my code (forgive me for all of the childish things in there, I'm just trying out different features):

import discord
import discord.ext
from discord import FFmpegPCMAudio
from discord.ext import commands
import random


Token = "No token for you :)"
client = commands.Bot(command_prefix = '!', intents=discord.Intents.all())


@client.event
async def on_ready():
    print(f"we're rolling as {client.user} \n")
    channel = client.get_channel(1022535885851459727)
    await channel.send("Tremble before my might hoomans😤😤")

#Member events:

@client.event
async def on_member_join(member):
    await member.send("Ok comrade, welcome to bot lab, pls not leave. Anways here rules \n1. No swearing \n2. No cursing \n3. No bullying, the owner is a crybaby \n4. No following the rules (u get banned if this one is broken)")
    channel = client.get_channel(1136658873688801302)
    jokes = [f"A failure known as {member} has joined this chat!", 
             f"Another {member} has joined the channel", 
             f"A {member} spawned", 
             f'cout << "{member} has joined teh chat" << endl;', 
             f"OUR great {member} has come to save us" ]
    await channel.send(random.choice(jokes))  # randint(0, len(jokes)) is inclusive and can index past the end

@client.event 
async def on_member_remove(member):
    await member.send("Bye our dear comrade")
    channel = client.get_channel(1136663317738442752)
    await channel.send(f"{member} has left the chat :(.)")

#Client commands:
    
@client.command()
async def hello(ctx):
    await ctx.send("Hello, I am pro bot")

@client.command()
async def byelol(ctx):
    await ctx.send("Bye, I am pro bot")
@client.command()
async def ping(ctx):
    await ctx.send(f"**pong** {ctx.message.author.mention}")


@client.event
async def on_message(message):
    message.content = message.content.lower()
    await client.process_commands(message)


#voice channel commands:

@client.command()  # pass_context is a discord.py 0.x relic and isn't needed here
async def micup(ctx):
    if (ctx.author.voice):
        await ctx.send(f"Joining on {ctx.message.author}'s command")
        channel = ctx.message.author.voice.channel
        voice = await channel.connect()
        source = FFmpegPCMAudio('Bluetooth.wav')
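        # note: a bare 'Bluetooth.wav' is resolved against the process's
        # current working directory, not this script's folder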
        voice.play(source)  # play() returns None, so there is nothing to keep
    else:
        await ctx.send("No.")



@client.command()
async def leave(ctx):
    if (ctx.voice_client):
        await ctx.send(f"Leaving on {ctx.message.author}'s command")
        await ctx.guild.voice_client.disconnect()
    else:
        await ctx.send("Nyet. Im not in voice chat u stoopid hooman")


@client.command()
async def pause(ctx):
    voice = discord.utils.get(client.voice_clients, guild = ctx.guild)
    if voice.is_playing():
        await ctx.send("Pausing..⏸")
        voice.pause()
    else:
        await ctx.send("I don't think I will.")

@client.command()
async def resume(ctx):
    voice = discord.utils.get(client.voice_clients, guild = ctx.guild)
    if voice.is_paused():
        await ctx.send("My ears are bleeding")
        voice.resume()
    else:
        await ctx.send("ALREADY BLASTING MUSIC")

@client.command()
async def stop(ctx):
    voice = discord.utils.get(client.voice_clients, guild = ctx.guild)
    await ctx.send("You can stop the song, but you can't stop me!")
    voice.stop()

@client.command()
async def play(ctx, arg):
    await ctx.send("Playing..")
    voice = ctx.guild.voice_client
    source = FFmpegPCMAudio(arg)
    voice.play(source)

if '__main__' == __name__:
    client.run(Token)

    I tried installing different versions of ffmpeg, still nothing. I tried to run the code outside of my venv, but still nothing (I doubt that's the problem). I changed the path to different folders, still nothing.
    The only time it DID work is when I entered a full path, but then when you want to play something you wouldn't want to type "!play D:_Python\DiscordBot\Bluetooth.wav". From what I've seen, it should be possible to play it by just saying "!play Bluetooth.wav".

    So, long story short: I want to make it so that the path I have to specify is just the file name. And when I do that, it doesn't play the sound. (Sorry if this is a dupe question, I just couldn't find anything understandable for my amateur brain.)
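
    One common way to get that behaviour, sketched below under stated assumptions rather than as a confirmed fix: resolve the bare filename against a folder the bot knows about before handing it to FFmpegPCMAudio. An ffmpeg process ending with return code 1 is what you typically see when it cannot open its input, and a relative name like 'Bluetooth.wav' is resolved against the process's current working directory, not the script's folder. AUDIO_DIR and the reply strings are illustrative, and the snippet reuses the client and FFmpegPCMAudio names from the code above.

# a minimal sketch, assuming the audio files live in an "audio" folder
# next to the bot script (AUDIO_DIR is an illustrative name)
import os

AUDIO_DIR = os.path.join(os.path.dirname(os.path.abspath(__file__)), "audio")

@client.command()
async def play(ctx, arg):
    path = os.path.join(AUDIO_DIR, arg)  # "!play Bluetooth.wav" -> full path
    if not os.path.isfile(path):
        await ctx.send(f"Can't find {arg} in {AUDIO_DIR}")
        return
    voice = ctx.guild.voice_client
    if voice is None:
        await ctx.send("Join a voice channel and use !micup first.")
        return
    await ctx.send("Playing..")
    voice.play(FFmpegPCMAudio(path))

    With this, "!play Bluetooth.wav" works from any working directory, because the lookup no longer depends on where the bot process was started.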
