
Media (1)
-
Video of a bee in portrait orientation
14 May 2011, by
Updated: February 2012
Language: French
Type: Video
Other articles (66)
-
Participate in its translation
10 April 2011
You can help us improve the wording used in the software, or translate it into any new language so that it can reach new linguistic communities.
This is done through the SPIP translation interface, where all of MediaSPIP's language modules are available. You simply need to subscribe to the translators' mailing list to request more information.
Currently MediaSPIP is only available in French and (...) -
Authorizations overridden by plugins
27 April 2010, by Mediaspip core
autoriser_auteur_modifier() so that visitors are able to edit their information on the authors page -
Supporting all media types
13 April 2011, by
Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats: images: png, gif, jpg, bmp and more; audio: MP3, Ogg, Wav and more; video: AVI, MP4, OGV, mpg, mov, wmv and more; text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (html, CSS), LaTeX, Google Earth and (...)
On other sites (10864)
-
Django api returns Gif as JPG despite a function to add it as video
7 September 2023, by Earthling
I'm trying to upload a .gif to my Django 3.2 API. I have already run some troubleshooting through Postman and concluded that my Flutter app sends the file as a .gif but the API returns it as a .jpg, so the problem is on the backend. Here is the relevant code, which checks the file_mime subtype and should then convert the incoming .gif to a video:




def add_media(self, file, order=None):
    check_can_add_media(post=self)

    is_in_memory_file = isinstance(file, InMemoryUploadedFile) or isinstance(file, SimpleUploadedFile)

    if is_in_memory_file:
        file_mime = magic.from_buffer(file.read())
    elif isinstance(file, TemporaryUploadedFile):
        file_mime = magic.from_file(file.temporary_file_path())
    else:
        file_mime = magic.from_file(file.name)

    check_mimetype_is_supported_media_mimetypes(file_mime)
    # The mime check read the file and moved the pointer; rewind before reusing it
    file.seek(0)

    file_mime_types = file_mime.split('/')

    file_mime_type = file_mime_types[0]
    file_mime_subtype = file_mime_types[1]

    temp_files_to_close = []

    if file_mime_subtype == 'gif':
        if is_in_memory_file:
            file = write_in_memory_file_to_disk(file)

        temp_dir = tempfile.gettempdir()
        converted_gif_file_name = os.path.join(temp_dir, str(uuid.uuid4()) + '.mp4')

        ff = ffmpy.FFmpeg(
            inputs={file.temporary_file_path() if hasattr(file, 'temporary_file_path') else file.name: None},
            outputs={converted_gif_file_name: None})
        ff.run()

        converted_gif_file = open(converted_gif_file_name, 'rb')
        temp_files_to_close.append(converted_gif_file)
        file = File(file=converted_gif_file)
        file_mime_type = 'video'

    has_other_media = self.media.exists()

    if file_mime_type == 'image':
        post_image = self._add_media_image(image=file, order=order)
        if not has_other_media:
            self.media_width = post_image.width
            self.media_height = post_image.height
            self.media_thumbnail = file
    elif file_mime_type == 'video':
        post_video = self._add_media_video(video=file, order=order)
        if not has_other_media:
            self.media_width = post_video.width
            self.media_height = post_video.height
            self.media_thumbnail = post_video.thumbnail.file
    else:
        raise ValidationError(
            _('Unsupported media file type')
        )

    for file_to_close in temp_files_to_close:
        file_to_close.close()

    self.save()


def _add_media_image(self, image, order):
    return PostImage.create_post_media_image(image=image, post_id=self.pk, order=order)

def _add_media_video(self, video, order):
    return PostVideo.create_post_media_video(file=video, post_id=self.pk, order=order)


@classmethod
def create_post_media_image(cls, image, post_id, order):
    hash = sha256sum(file=image.file)
    post_image = cls.objects.create(image=image, post_id=post_id, hash=hash, thumbnail=image)
    PostMedia.create_post_media(type=PostMedia.MEDIA_TYPE_IMAGE,
                                content_object=post_image,
                                post_id=post_id, order=order)
    return post_image


@classmethod
def create_post_media_video(cls, file, post_id, order):
    hash = sha256sum(file=file.file)
    video_backend = get_backend()

    if isinstance(file, InMemoryUploadedFile):
        # If it's in memory, reading it shouldn't be an issue as the file should be small.
        in_disk_file = write_in_memory_file_to_disk(file)
        thumbnail_path = video_backend.get_thumbnail(video_path=in_disk_file.name, at_time=0.0)
    else:
        thumbnail_path = video_backend.get_thumbnail(video_path=file.file.name, at_time=0.0)

    with open(thumbnail_path, 'rb+') as thumbnail_file:
        post_video = cls.objects.create(file=file, post_id=post_id, hash=hash, thumbnail=File(thumbnail_file))
        PostMedia.create_post_media(type=PostMedia.MEDIA_TYPE_VIDEO,
                                    content_object=post_video,
                                    post_id=post_id, order=order)
    return post_video

I'm not sure where the problem is. From my limited understanding, it is taking only the first frame of the .gif and uploading it as an image.
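One detail that may be worth double-checking: if the magic module used here is python-magic, from_buffer() and from_file() return a human-readable description (e.g. "GIF image data, version 89a") rather than a MIME type such as image/gif unless they are called with mime=True, which would change what the file_mime.split('/') subtype check actually sees. The standalone sketch below illustrates the same detect-then-convert idea outside the Django stack; it is written in Go purely for illustration and assumes ffmpeg is on the PATH and that hypothetical input.gif / output.mp4 files are used.

    // Hypothetical, self-contained check (not the asker's Django code): detect a
    // GIF from the file's leading bytes and convert it to MP4 with ffmpeg,
    // mirroring the add_media() branch that switches file_mime_type to 'video'.
    package main

    import (
        "fmt"
        "log"
        "net/http"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        f, err := os.Open("input.gif") // assumed sample file
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()

        // http.DetectContentType only looks at the first 512 bytes.
        head := make([]byte, 512)
        n, _ := f.Read(head)
        mime := http.DetectContentType(head[:n]) // "image/gif" for GIF87a/GIF89a data
        fmt.Println("detected:", mime)

        if strings.HasSuffix(mime, "/gif") {
            // Same conversion the Django code delegates to ffmpy: GIF in, MP4 out.
            // (GIFs with odd dimensions may additionally need a scale filter for libx264.)
            cmd := exec.Command("ffmpeg", "-y", "-i", "input.gif", "output.mp4")
            cmd.Stderr = os.Stderr
            if err := cmd.Run(); err != nil {
                log.Fatal(err)
            }
            fmt.Println("wrote output.mp4")
        }
    }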


-
Google Speech - Streaming Request Returns EOF
9 October 2017, by Josh
Using Go, I'm taking an RTMP stream, transcoding it to FLAC (using ffmpeg) and attempting to stream it to Google's Speech API to transcribe the audio. However, I keep getting EOF errors when sending the data. I can't find any information on this error in the docs, so I'm not exactly sure what's causing it.
I'm chunking the received data into 3-second clips (the length isn't relevant as long as it's less than the maximum length of a streaming recognition request).
Here is the core of my code:
func main() {
    done := make(chan os.Signal)
    received := make(chan []byte)

    go receive(received)
    go transcribe(received)

    signal.Notify(done, os.Interrupt, syscall.SIGTERM)

    select {
    case <-done:
        os.Exit(0)
    }
}

func receive(received chan<- []byte) {
    var b bytes.Buffer
    stdout := bufio.NewWriter(&b)

    cmd := exec.Command("ffmpeg", "-i", "rtmp://127.0.0.1:1935/live/key", "-f", "flac", "-ar", "16000", "-")
    cmd.Stdout = stdout

    if err := cmd.Start(); err != nil {
        log.Fatal(err)
    }

    duration, _ := time.ParseDuration("3s")
    ticker := time.NewTicker(duration)

    for {
        select {
        case <-ticker.C:
            stdout.Flush()
            log.Printf("Received %d bytes", b.Len())
            received <- b.Bytes()
            b.Reset()
        }
    }
}

func transcribe(received <-chan []byte) {
    ctx := context.TODO()

    client, err := speech.NewClient(ctx)
    if err != nil {
        log.Fatal(err)
    }

    stream, err := client.StreamingRecognize(ctx)
    if err != nil {
        log.Fatal(err)
    }

    // Send the initial configuration message.
    if err = stream.Send(&speechpb.StreamingRecognizeRequest{
        StreamingRequest: &speechpb.StreamingRecognizeRequest_StreamingConfig{
            StreamingConfig: &speechpb.StreamingRecognitionConfig{
                Config: &speechpb.RecognitionConfig{
                    Encoding:        speechpb.RecognitionConfig_FLAC,
                    LanguageCode:    "en-GB",
                    SampleRateHertz: 16000,
                },
            },
        },
    }); err != nil {
        log.Fatal(err)
    }

    for {
        select {
        case data := <-received:
            if len(data) > 0 {
                log.Printf("Sending %d bytes", len(data))
                if err := stream.Send(&speechpb.StreamingRecognizeRequest{
                    StreamingRequest: &speechpb.StreamingRecognizeRequest_AudioContent{
                        AudioContent: data,
                    },
                }); err != nil {
                    log.Printf("Could not send audio: %v", err)
                }
            }
        }
    }
}

Running this code gives this output:
2017/10/09 16:05:00 Received 191704 bytes
2017/10/09 16:05:00 Saving 191704 bytes
2017/10/09 16:05:00 Sending 191704 bytes
2017/10/09 16:05:00 Could not send audio: EOF
2017/10/09 16:05:03 Received 193192 bytes
2017/10/09 16:05:03 Saving 193192 bytes
2017/10/09 16:05:03 Sending 193192 bytes
2017/10/09 16:05:03 Could not send audio: EOF
2017/10/09 16:05:06 Received 193188 bytes
2017/10/09 16:05:06 Saving 193188 bytes
2017/10/09 16:05:06 Sending 193188 bytes // Notice that this doesn't error
2017/10/09 16:05:09 Received 191704 bytes
2017/10/09 16:05:09 Saving 191704 bytes
2017/10/09 16:05:09 Sending 191704 bytes
2017/10/09 16:05:09 Could not send audio: EOF

Notice that not all of the Send calls fail. Could anyone point me in the right direction here? Is it something to do with the FLAC headers, or something else? I also wonder whether resetting the buffer causes some of the data to be dropped (i.e. it's a non-trivial operation that actually takes some time to complete) and the API doesn't like the missing information.
Any help would be really appreciated.
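A note on the EOF itself, in case it helps future readers: with gRPC client streams in Go (which the Speech client wraps), an io.EOF returned from Send generally means the stream has already been terminated, and the real status only surfaces when Recv is called. A minimal, hedged addition to transcribe() along these lines (it reuses the stream variable from the question and needs the io package imported) would drain the response side so the underlying error is actually logged:

    // Sketch only: read the response side of the stream so that the status that
    // actually killed it (bad audio, config problem, quota, ...) becomes visible
    // instead of the opaque io.EOF later reported by Send.
    go func() {
        for {
            resp, err := stream.Recv()
            if err == io.EOF {
                return // server closed the response stream cleanly
            }
            if err != nil {
                log.Fatalf("stream closed with error: %v", err) // the real reason behind Send's EOF
            }
            for _, result := range resp.Results {
                log.Printf("result: %+v", result)
            }
        }
    }()

Separately, the suspicion about the buffer reset is worth testing: b.Bytes() returns a slice that shares the buffer's backing array, and ffmpeg keeps writing into that buffer from another goroutine, so copying the chunk before Reset() is a cheap way to rule out corrupted audio, for example:

    case <-ticker.C:
        stdout.Flush()
        chunk := make([]byte, b.Len())
        copy(chunk, b.Bytes()) // b.Bytes() aliases the buffer; copy before Reset
        b.Reset()
        received <- chunk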
-
Revision 1161055129: Be consistent with SAD values. SAD returns unsigned values. Make all the declara
26 June 2012, by Johann
Changed Paths: Modify /test/sad_test.cc, Modify /vp8/common/mfqe.c, Modify /vp8/common/rtcd_defs.sh, Modify /vp8/common/sad_c.c, Modify /vp8/common/variance.h, Modify /vp8/common/x86/sad_sse2.asm, Modify /vp8/encoder/mcomp.c, Modify /vp8/encoder/rdopt.c
Be consistent with SAD (...)