Ruby equivalent of Node .toString('ascii')












I am struggling with converting a Node application to Ruby. I have a Buffer of integers that I need to encode as an ASCII string.



In Node this is done like this:



const a = Buffer([53, 127, 241, 120, 57, 136, 112, 210, 162, 200, 111, 132, 46, 146, 210, 62, 133, 88, 80, 97, 58, 139, 234, 252, 246, 19, 191, 84, 30, 126, 248, 76])
const b = a.toString('hex')
// b = "357ff178398870d2a2c86f842e92d23e855850613a8beafcf613bf541e7ef84c"
const c = a.toString('ascii')
// c = '5qx9\bpR"Ho\u0004.\u0012R>\u0005XPa:\u000bj|v\u0013?T\u001e~xL'


I want to get the same output in Ruby, but I don't know how to convert a to c. I used b to validate that a is parsed the same in Ruby and Node, and it looks like it's working.



a = [53, 127, 241, 120, 57, 136, 112, 210, 162, 200, 111, 132, 46, 146, 210, 62, 133, 88, 80, 97, 58, 139, 234, 252, 246, 19, 191, 84, 30, 126, 248, 76].pack('C*')
b = a.unpack('H*')
# ["357ff178398870d2a2c86f842e92d23e855850613a8beafcf613bf541e7ef84c"]
# c = ???


I have tried several things, virtually all of the unpack options, and I also tried the encode function, but I lack an understanding of what the problem actually is here.










node.js ruby

asked Nov 14 '18 at 18:07 by Pablo Jomer, edited Nov 14 '18 at 18:28 by jonrsharpe

1 Answer

Okay, well, I am not that familiar with Node.js, but you can get fairly close with a few basic facts.

Node states:

    'ascii' - For 7-bit ASCII data only. This encoding is fast and will strip the high bit if set.




Update: after rereading the Node.js description, I think it just means it will drop 127 and only keep the first 7 bits, so this can be simplified to:



def node_js_ascii(bytes)
  bytes.map { |b| b % 128 }        # keep only the low 7 bits
       .reject(&127.method(:==))   # drop DEL (127), which Node's output omits
       .pack('C*')
       .encode(Encoding::UTF_8)
end

node_js_ascii(a) # a here is the original array of integers, not the packed string
#=> "5qx9\bpR\"Ho\u0004.\u0012R>\u0005XPa:\vj|v\u0013?T\u001E~xL"


Now the only differences are that Node.js renders a vertical tab as "\u000b" where Ruby renders it as "\v", and that Ruby uses uppercase hex digits in its unicode escapes ("\u001E" vs "\u001e"). You could handle this if you so chose.
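
For what it's worth, those differences are display-only; the underlying characters are identical, so direct string comparison still works. A quick irb check (mine, not from the original answer):

"\v" == "\u000b"     #=> true (same vertical-tab character, different escape style)
"\u001E" == "\u001e" #=> true (hex case inside the escape does not matter)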



Please note: this form of encoding is not reversible, because your byte array contains values greater than 127 (values that need more than 7 bits).
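
As a quick illustration of the information loss (my own example, not from the answer): once the high bit is stripped, two different bytes collapse to the same character, so the original bytes cannot be recovered.

(113 % 128).chr #=> "q"
(241 % 128).chr #=> "q" (113 and 241 become indistinguishable)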



TL;DR (the original explanation and solution below only work up to 8 bits)



Okay, so we know the max supported decimal is 127 ("1111111".to_i(2)) and that Node will strip the high bit if set, meaning (I am assuming) that 241, an 8-bit number, becomes 113 once we strip the high bit.
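
A quick sanity check of that arithmetic in irb (mine, not part of the original answer):

241.to_s(2) #=> "11110001"
241 & 0x7F  #=> 113 (equivalent to 241 - 128 whenever the high bit is set)
113.chr     #=> "q"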



With that understanding we can use:

a = [53, 127, 241, 120, 57, 136, 112, 210, 162, 200, 111, 132, 46, 146, 210, 62, 133, 88, 80, 97, 58, 139, 234, 252, 246, 19, 191, 84, 30, 126, 248, 76].map do |b|
  b < 128 ? b : b - 128   # strip the high bit from bytes >= 128
end.pack('C*')
#=> "5\x7Fqx9\bpR\"Ho\x04.\x12R>\x05XPa:\vj|v\x13?T\x1E~xL"


Then we can encode that as UTF-8 like so:

a.encode(Encoding::UTF_8)
#=> "5\u007Fqx9\bpR\"Ho\u0004.\u0012R>\u0005XPa:\vj|v\u0013?T\u001E~xL"


but there is still an issue here.



It seems Node.js also ignores the Delete character (127) when it converts to 'ascii' (if we stripped its highest set bit we would get 63 ("?"), which doesn't match the output), so we can fix that too:



a = [53, 127, 241, 120, 57, 136, 112, 210, 162, 200, 111, 132, 46, 146, 210, 62, 133, 88, 80, 97, 58, 139, 234, 252, 246, 19, 191, 84, 30, 126, 248, 76].map do |b|
  b < 127 ? b : b - 128   # now 127 maps to -1, which pack('C*') wraps to 0xFF
end.pack('C*')
#=> "5\xFFqx9\bpR\"Ho\x04.\x12R>\x05XPa:\vj|v\x13?T\x1E~xL"

a.encode(Encoding::UTF_8, undef: :replace, replace: '')
#=> "5qx9\bpR\"Ho\u0004.\u0012R>\u0005XPa:\vj|v\u0013?T\u001E~xL"


Now, since 127 - 128 = -1 (a negative value that pack('C*') wraps to the unsigned byte "\xFF", which has no defined conversion to UTF-8), we add undef: :replace (when a character is undefined in the target encoding, replace it) and replace: '' (replace it with nothing).
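
To see those options in isolation (a tiny check of my own, not from the answer): converting the binary string "\xFF" to UTF-8 raises Encoding::UndefinedConversionError by default, and yields an empty string with the replace options.

s = [0xFF].pack('C') # ASCII-8BIT string "\xFF"
s.encode(Encoding::UTF_8)                               # raises Encoding::UndefinedConversionError
s.encode(Encoding::UTF_8, undef: :replace, replace: '') #=> ""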






answered Nov 14 '18 at 22:11 by engineersmnky, edited Nov 15 '18 at 13:39
• Although this seems to be a working solution, I ended up not having to use it. Still, I can verify that it works. – Pablo Jomer, Nov 15 '18 at 19:00

• @PabloJomer I honestly would not recommend using it, simply because it cannot be converted back unless every value fits in 7 bits and none is 127. – engineersmnky, Nov 15 '18 at 19:02

• I understood that. The reason we needed it was that we had to compare against a string generated by a system outside our control. But it turned out we could use another method where this was not necessary. – Pablo Jomer, Nov 15 '18 at 19:04

• Thanks for the help. I bet it will help someone in the future. – Pablo Jomer, Nov 15 '18 at 19:04










