
[SR-7733] Casting numbers giving nil #3700

Closed
swift-ci opened this issue May 21, 2018 · 3 comments

Comments

@swift-ci
Contributor

Previous ID SR-7733
Radar None
Original Reporter YANOUSHek (JIRA User)
Type Bug
Status Resolved
Resolution Invalid
Environment

Apple Swift version 4.1 (swiftlang-902.0.48 clang-902.0.37.1)
Target: x86_64-apple-darwin17.5.0

Running on MacBook Pro (13-inch, 2016, Two Thunderbolt 3 ports) with macOS 10.13.4 (17E202); Xcode: Version 9.3.1 (9E501)

Additional Detail from JIRA
Votes 0
Component/s Foundation, Standard Library
Labels Bug
Assignee None
Priority Medium

md5: eaf4c498641b6112ee4c7dc40fff5a78

Issue Description:

There's been a change in how Swift handles casting numeric values, and it's either really buggy or not intuitive. Please see the examples below. All testing was done with:

Apple Swift version 4.1 (swiftlang-902.0.48 clang-902.0.37.1)
Target: x86_64-apple-darwin17.5.0

> let x = 0.1
x: Double = 0.10000000000000001

> let x: Float = 0.1
x: Float = 0.100000001

> let x = 0.1 as Float
x: Float = 0.100000001

> let x = 0.1 as? Float
x: Float? = nil

> Int8(1) as? Int16
$R1: Int16? = nil

On one hand, it looks like casting between numeric types has been disabled entirely, whether I'm widening to a type that can hold every value of the source type (e.g. casting Int8 to Int16) or narrowing the other way (which might reasonably be blocked, since the result would depend on the runtime value).

On the other hand, implicit conversions of float literals (which I believe default to Double in Swift) to Float, as in the second and third examples, work without a problem.

I can't really understand the logic behind `0.1 as Float` and `0.1 as? Float` giving completely different results.

The problem extends to NSNumber objects, which behave really weirdly when cast.

> NSNumber(value: 0.1)
$R2: NSNumber = Double(0.1)

> NSNumber(value: 0.1) as? Float
$R3: Float? = nil

> NSNumber(value: 13)
$R4: __NSCFNumber = Int64(13)

> NSNumber(value: 13) as? Float
$R5: Float? = 13

> Int64(13) as? Float
$R6: Float? = nil

Seeing these tests in action, I can't really tell when it's possible to cast an NSNumber to Float. Why can an NSNumber holding 13 be cast to Float, while an Int64 with the same value can't?

Can someone please let me know what's going on here?

@belkadan

This is correct behavior. In Swift, literals don't have an intrinsic type, just a default one. So when you write `1.0 as Float` or even `10 as Float`, the compiler knows the value should be a Float; it doesn't start with a Double or an Int and then convert. `as?`, however, is only used for converting between runtime values, so the compiler falls back to the default type for the literal. And indeed, the different basic number types cannot be dynamically cast to one another in Swift.
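A minimal sketch of the distinction, mirroring the transcript above (paste into a REPL or script; the compiler may warn that the failing cast always fails):

let a = 1.0 as Float    // coercion: the literal is typed as Float from the start
let b = 10 as Float     // also a coercion; an integer literal can become a Float

let d = 1.0             // no annotation, so the literal defaults to Double
let c = d as? Float     // runtime cast between unrelated value types, so nil
print(a, b, c as Any)   // prints: 1.0 10.0 nil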

NSNumber does come along and confuse things. It's mostly just there for compatibility with Objective-C, but it can hold any of the basic number types. Because of that, it's possible to convert to and from NSNumber.
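A small sketch of that boxing behavior (hedged; the results assume the Swift 4.1-era NSNumber bridging rules, under which a cast out of the box succeeds only when the value is representable exactly):

import Foundation

let n = NSNumber(value: 13)    // boxed as an integer
print(n as? Float as Any)      // Optional(13.0): 13 is exactly representable as Float
print(n as? Int8 as Any)       // Optional(13): also exact

let m = NSNumber(value: 0.1)   // boxed as a Double
print(m as? Float as Any)      // nil: Float(0.1) and Double(0.1) round differently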

@swift-ci
Contributor Author

Comment by Janusz Bossy (JIRA)

My point here is that there's a lot of inconsistency in this behavior. The simplest example I can create is:

1> let x = NSNumber(value: 0.1)
x: NSNumber = Double(0.1)
2> x.floatValue
$R0: Float = 0.100000001
3> x as? Float
$R1: Float? = nil

NSNumber does return a floatValue, but it cannot be cast to a Float.

The problem happens mostly when working with JSON-based APIs. The data format carries no type information, so parsers use NSNumber to make sure everything works properly. Then I run into a lot of problems, because whether a value can be cast to Float (or any other numeric type) depends on what data was actually sent from the server.
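A minimal sketch of a workaround under that constraint (the payload and the "price" key are made up for illustration): read numbers through NSNumber's converting accessors, which round, rather than through `as?`, which refuses lossy conversions.

import Foundation

let data = "{\"price\": 0.1}".data(using: .utf8)!
if let object = try? JSONSerialization.jsonObject(with: data),
   let json = object as? [String: Any],
   let number = json["price"] as? NSNumber {
    print(number.floatValue)         // 0.100000001: rounds instead of failing
    print(number as? Float as Any)   // nil: the cast refuses to round
}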

@belkadan
Copy link

@phausler can probably explain this better than me, but the basic idea is that a cast to Float will fail whenever doing so would lose precision. Since "0.1" (a value written in JSON) can't be precisely represented by a binary floating-point value, both the Float and Double representations are approximations. The JSON parser in Foundation uses Double for all floating-point values because it's a better approximation.
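A minimal sketch of that rule (assuming the bridging behavior described above): the NSNumber cast succeeds only when the value round-trips exactly, and the standard library exposes the same check through the failable `exactly:` initializers.

import Foundation

print(NSNumber(value: 0.1) as? Float as Any)   // nil: Double(0.1) does not convert to Float exactly
print(NSNumber(value: 0.5) as? Float as Any)   // Optional(0.5): 0.5 is exact in both types
print(NSNumber(value: 13) as? Float as Any)    // Optional(13.0): small integers are exact in Float

print(Float(exactly: 0.1 as Double) as Any)    // nil
print(Float(exactly: 0.5 as Double) as Any)    // Optional(0.5)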

@swift-ci swift-ci transferred this issue from apple/swift-issues Apr 25, 2022
@shahmishal shahmishal transferred this issue from apple/swift May 5, 2022
This issue was closed.