javascript - Wrong ELF class - Python
I am trying to build this library for LZJB compression (the file is located here). Unfortunately, when I compile it and copy it into the site-packages directory, importing the binding to the C library fails with a "wrong ELF class" error:

    >>> import PyLZJB
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    ImportError: ./PyLZJB.so: wrong ELF class: ELFCLASS32
Any help would be great. :)
PS: I'm running Ubuntu 10.04, 64-bit.
EDIT:
If someone could recommend an alternative compression algorithm, I would be equally happy. :) The algorithm is for HTML compression, and it needs client-side JavaScript decompression/compression support too.
For a much more widely supported member of the LZ compression family, there's always good old deflate.
But is it worth it? Compressing the submission data in relatively slow client-side code is a lot of overhead, and it is not trivial to submit the raw bytes you get back from the compressor. Can the server even accept gzip/deflate-compressed data inside a request?
Form submissions in the query string would have to be quite small, or you will run into browser or server URL-length limits, which are not generous. If you have a lot of data, it has to go in a POST form.
Also, in a POST form the default enctype is application/x-www-form-urlencoded, which means most bytes get encoded as three-character %NN sequences, so your submission may well end up bigger than the original uncompressed data. You have to use an enctype="multipart/form-data" form to submit raw bytes.

Even then, you are going to have encoding problems. JS strings are made of Unicode characters, not bytes, and they will be encoded using the form page's encoding. That should usually be UTF-8, but then you can't actually create an arbitrary sequence of bytes to upload, because many byte sequences are not valid UTF-8. You could store each byte as a single code unit in the string, but on submission every code unit from 0x80 upwards gets encoded as two bytes in UTF-8, which bloats your compressed data by up to 50% (since roughly half of random byte values are 0x80 or above).
In theory, if you don't mind losing proper internationalization support, you could serve the page as ISO-8859-1 and use the escape/encodeURIComponent idiom to convert between UTF-8 and ISO-8859-1 on output. But this won't work in practice, because browsers lie: content marked as ISO-8859-1 is actually encoded and decoded using Windows code page 1252. You could pick some other encoding in which every byte maps to a character, but that means more manual encoding overhead and restricts which encoding you can use for the page.
You can avoid the encoding problems entirely by using something like base64, but then again you have more manual encoding performance overhead and 33% bloat.
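The 33% figure follows directly from how base64 works: every 3 input bytes become 4 output characters. A small illustration (Node's Buffer is used here for the byte handling; in the browser you would typically use btoa on a binary string instead):

```javascript
// Six invented arbitrary bytes standing in for compressor output.
const raw = Buffer.from([0x00, 0x80, 0xFF, 0x10, 0x20, 0x30]);

// Base64 output is pure ASCII, so it survives any form encoding...
const b64 = raw.toString('base64');

// ...but 3 bytes -> 4 characters means a 4/3 size ratio.
console.log(raw.length, b64.length); // 6 bytes -> 8 characters

// Decoding on the server recovers the original bytes exactly.
const decoded = Buffer.from(b64, 'base64');
```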
In summary: all the approaches are bad. I don't think you can really win with this.