
Media (1)
-
The Slip - Artworks
26 September 2011, by
Updated: September 2011
Language: English
Type: Text
Other articles (48)
-
Permissions overridden by plugins
27 April 2010, by Mediaspip core
autoriser_auteur_modifier() so that visitors are able to modify their information on the authors page
-
Supporting all media types
13 April 2011, by
Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats: images: png, gif, jpg, bmp and more; audio: MP3, Ogg, Wav and more; video: AVI, MP4, OGV, mpg, mov, wmv and more; text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (html, CSS), LaTeX, Google Earth and (...)
-
MediaSPIP v0.2
21 June 2013, by
MediaSPIP 0.2 is the first MediaSPIP stable release.
Its official release date is June 21, 2013, and it is announced here.
The zip file provided here only contains the sources of MediaSPIP in its standalone version.
To get a working installation, you must manually install all software dependencies on the server.
If you want to use this archive for an installation in "farm mode", you will also need to carry out other manual (...)
On other sites (9220)
-
How to write a subcommand to FFPLAY during playback using the subprocess module?
4 March 2023, by ChienMouille
I'm trying to pause video playback started with FFPLAY through a Python subprocess. You can do this manually by pressing the "p" key on the keyboard while the video is playing. I'd like to emulate this behavior through a Python call.

I'm now sending a "p" string, encoded as bytes, through the stdin of the Popen call. The video starts and I can pause it with the keyboard, but the communicate command doesn't seem to do anything.

import subprocess
import time

proc = subprocess.Popen(['ffplay', 'PATH_TO_'],
                        stdin=subprocess.PIPE,
                        stdout=subprocess.PIPE,
                        )
time.sleep(2) # just waiting to make sure playback has started
proc.communicate(input="p".encode())[0]




Thanks in advance!
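For context, Popen.communicate() writes its input, closes stdin, and then waits for the process to exit, so it cannot be used to send keystrokes repeatedly during playback. Below is a minimal sketch of the alternative the question is reaching for, keeping the pipe open and writing to it directly; note that ffplay normally reads its keyboard shortcuts from its SDL window rather than from stdin, so the write may have no visible effect, and 'PATH_TO_VIDEO' is only a placeholder:

import subprocess
import time

# Keep stdin open so it can be written to repeatedly during playback.
proc = subprocess.Popen(['ffplay', 'PATH_TO_VIDEO'],   # placeholder path
                        stdin=subprocess.PIPE,
                        stdout=subprocess.PIPE)

time.sleep(2)            # wait until playback has (probably) started
proc.stdin.write(b'p')   # attempt to send the "pause" key
proc.stdin.flush()       # flush without closing the pipe

# ... later, stop playback and clean up.
proc.terminate()
proc.wait()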


-
Using ffmpeg with frei0r on Windows 10. Can't find module from frei0r
21 February 2023, by Denis
I am trying to add a filter to my video using ffmpeg and frei0r filters on Windows 10. ffmpeg.exe is located in the system32 folder. ffmpeg works fine on other tasks. As soon as I try to add the frei0r filter, I get an error:


Could not find module 'glow'.
Error initializing filter 'frei0r' with args 'glow:20'



I downloaded the frei0r DLL files from the official site and placed them in the system32 folder, and then tried placing them in another folder. Additionally, I set the path:


set FREI0R_PATH=C:\WINDOWS\system32\frei0r-1



In cmd I enter the following command:


ffmpeg -loglevel debug -i 1.mp4 -vf "frei0r=glow:20" -t 10 1out.mp4



Help me.


I tried everything I saw online and nothing helped.
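A small diagnostic sketch that may help narrow this down, assuming the DLLs were extracted to the directory named in FREI0R_PATH above: it checks that a glow plugin DLL actually exists there and passes the variable explicitly to the ffmpeg process (the ffmpeg build also needs frei0r support compiled in, which an --enable-frei0r entry in the ffmpeg -buildconf output would confirm):

import os
import subprocess

# Directory from the question; adjust if the DLLs were extracted elsewhere (assumption).
frei0r_dir = r"C:\WINDOWS\system32\frei0r-1"

# Check that the glow plugin is really in that directory.
print("glow.dll present:", os.path.isfile(os.path.join(frei0r_dir, "glow.dll")))

# Make FREI0R_PATH visible to the ffmpeg process, regardless of the cmd session.
env = dict(os.environ, FREI0R_PATH=frei0r_dir)

# Same command as in the question.
subprocess.run(
    ["ffmpeg", "-loglevel", "debug", "-i", "1.mp4",
     "-vf", "frei0r=glow:20", "-t", "10", "1out.mp4"],
    env=env,
    check=False,
)

Also worth remembering that set FREI0R_PATH=... only applies to the cmd session it is typed in, so the variable has to be set in the same window (or as a persistent environment variable) before running ffmpeg.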


-
TypeError: expected str, bytes or os.PathLike object, not module when trying to stream OpenCV frames to RTMP server
30 November 2022, by seriously
I am using OpenCV and the face-recognition API to detect a face with a webcam and then compare it with a previously taken image to check whether the people in both images are the same. The OpenCV and face-recognition part of the code works properly. What I am now trying to achieve is to stream the OpenCV-processed video frames to an RTMP server, so I am trying to use ffmpeg and run the command using subprocess, but when I run the code I get the error
TypeError: expected str, bytes or os.PathLike object, not module
But I am writing the frames as bytes to stdin, hence p.stdin.write(frame.tobytes()). How can I fix this and properly stream my OpenCV frames to an RTMP server using ffmpeg? Thanks in advance.

Traceback (most recent call last):
  File "C:\Users\blah\blah\test.py", line 52, in <module>
    p = subprocess.Popen(command, stdin=subprocess.PIPE, shell=False)
  File "C:\Python310\lib\subprocess.py", line 969, in __init__
    self._execute_child(args, executable, preexec_fn, close_fds,
  File "C:\Python310\lib\subprocess.py", line 1378, in _execute_child
    args = list2cmdline(args)
  File "C:\Python310\lib\subprocess.py", line 561, in list2cmdline
    for arg in map(os.fsdecode, seq):
  File "C:\Python310\lib\os.py", line 822, in fsdecode
    filename = fspath(filename) # Does type-checking of `filename`.
TypeError: expected str, bytes or os.PathLike object, not module


import cv2
import numpy as np
import face_recognition
import os
import subprocess
import ffmpeg

path = '../attendance_imgs'
imgs = []
classNames = []
myList = os.listdir(path)

for cls in myList:
    curruntImg = cv2.imread(f'{path}/{cls}')
    imgs.append(curruntImg)
    classNames.append(os.path.splitext(cls)[0])

def findEncodings(imgs):
    encodeList = []
    for img in imgs:
        img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
        encode = face_recognition.face_encodings(img)[0]
        encodeList.append(encode)
    return encodeList

encodeListKnown = findEncodings(imgs)
print('Encoding Complete')

cap = cv2.VideoCapture(0)

rtmp_url = "rtmp://127.0.0.1:1935/stream/webcam"

fps = int(cap.get(cv2.CAP_PROP_FPS))
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))

# command and params for ffmpeg
command = [ffmpeg,
           '-y',
           '-f', 'rawvideo',
           '-vcodec', 'rawvideo',
           '-pix_fmt', 'bgr24',
           '-s', "{}x{}".format(width, height),
           '-r', str(fps),
           '-i', '-',
           '-c:v', 'libx264',
           '-pix_fmt', 'yuv420p',
           '-preset', 'ultrafast',
           '-f', 'flv',
           'rtmp://127.0.0.1:1935/stream/webcam']

p = subprocess.Popen(command, stdin=subprocess.PIPE, shell=False)


while True:
    ret, frame, success, img = cap.read()
    if not ret:
        print("frame read failed")
        break
    imgSmall = cv2.resize(img, (0,0), None, 0.25, 0.25)
    imgSmall = cv2.cvtColor(imgSmall, cv2.COLOR_BGR2RGB)

    currentFrameFaces = face_recognition.face_locations(imgSmall)
    currentFrameEncodings = face_recognition.face_encodings(imgSmall, currentFrameFaces)

    for encodeFace, faceLocation in zip(currentFrameEncodings, currentFrameFaces):
        matches = face_recognition.compare_faces(encodeListKnown, encodeFace)
        faceDistance = face_recognition.face_distance(encodeListKnown, encodeFace)
        matchIndex = np.argmin(faceDistance)

        if matches[matchIndex]:
            name = classNames[matchIndex].upper()
            y1, x2, y2, x1 = faceLocation
            y1, x2, y2, x1 = y1 * 4, x2 * 4, y2 * 4, x1 * 4
            cv2.rectangle(img, (x1, y1), (x2, y2), (0, 255, 0), 2)
            cv2.rectangle(img, (x1, y2 - 35), (x2, y2), (0, 255, 0), cv2.FILLED)
            cv2.putText(img, name, (x1 + 6, y2 - 6), cv2.FONT_HERSHEY_DUPLEX, 1, (255, 255, 255), 2)

    # write to pipe
    p.stdin.write(frame.tobytes())
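For what it's worth, the traceback points at the first element of command: it is the imported ffmpeg module object rather than the program name, and on Windows subprocess runs os.fsdecode over every argument, which only accepts str, bytes or path-like values. A minimal sketch of the likely fix, reusing the width, height, fps and rtmp_url variables defined above and assuming the ffmpeg executable is on the PATH (the unused import ffmpeg can then simply be dropped):

# Pass the executable name as a string, not the imported ffmpeg module.
command = ['ffmpeg',
           '-y',
           '-f', 'rawvideo',
           '-vcodec', 'rawvideo',
           '-pix_fmt', 'bgr24',
           '-s', "{}x{}".format(width, height),
           '-r', str(fps),
           '-i', '-',
           '-c:v', 'libx264',
           '-pix_fmt', 'yuv420p',
           '-preset', 'ultrafast',
           '-f', 'flv',
           rtmp_url]

p = subprocess.Popen(command, stdin=subprocess.PIPE, shell=False)

Separately, cap.read() returns only two values (a success flag and the frame), so the four-way unpacking at the top of the loop would likely be the next thing to fail.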